2026-03-10T13:31:40.814 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T13:31:40.818 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T13:31:40.845 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1052
branch: squid
description: orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests}
email: null
first_in_suite: false
flavor: default
job_id: '1052'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      client:
        debug ms: 1
      global:
        mon election default strategy: 1
        ms bind msgr2: false
        ms type: async
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on pool no app: false
      osd:
        debug ms: 1
        debug osd: 20
        osd class default list: '*'
        osd class load list: '*'
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - reached quota
    - but it is still running
    - overall HEALTH_
    - \(POOL_FULL\)
    - \(SMALLER_PGP_NUM\)
    - \(CACHE_POOL_NO_HIT_SET\)
    - \(CACHE_POOL_NEAR_FULL\)
    - \(POOL_APP_NOT_ENABLED\)
    - \(PG_AVAILABILITY\)
    - \(PG_DEGRADED\)
    - CEPHADM_STRAY_DAEMON
    log-only-match:
    - CEPHADM_
    mon_bind_msgr2: false
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_mode: cephadm-package
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
      extra_packages:
      - cephadm
      extra_system_packages:
        deb:
        - python3-xmltodict
        - python3-jmespath
        rpm:
        - bzip2
        - perl-Test-Harness
        - python3-xmltodict
        - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - ceph.rgw.foo.a
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
  - ceph.iscsi.iscsi.a
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm05.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOJ7QsVDuolMOUmtnxdd0jU0xr0EB1+/1PzSSLzNPZZgbbxxTemuXlvtAI57bH1r/kZaYPbsbTYS1K1764W1cLc=
  vm09.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBN9cIi579zkFDNlnBCNj07QqfZg3jyLGakqqfbeo1mtH/qzQMrJI00ZI+Et0sLWF8PX+vFMhiQk+Fs8Y6H4MtI=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install runc nvmetcli nvme-cli -y
    - sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
    - sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
- install: null
- cephadm:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
- workunit:
    clients:
      client.0:
      - rados/test.sh
      - rados/test_pool_quota.sh
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T13:31:40.845 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T13:31:40.846 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T13:31:40.846 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T13:31:40.846 INFO:teuthology.task.internal:Checking packages...
2026-03-10T13:31:40.846 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T13:31:40.846 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T13:31:40.846 INFO:teuthology.packaging:ref: None
2026-03-10T13:31:40.846 INFO:teuthology.packaging:tag: None
2026-03-10T13:31:40.846 INFO:teuthology.packaging:branch: squid
2026-03-10T13:31:40.846 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T13:31:40.847 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-10T13:31:41.579 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-10T13:31:41.580 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T13:31:41.581 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T13:31:41.581 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T13:31:41.581 INFO:teuthology.task.internal:Saving configuration
2026-03-10T13:31:41.586 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T13:31:41.587 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T13:31:41.594 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm05.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1052', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 13:30:26.812838', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:05', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOJ7QsVDuolMOUmtnxdd0jU0xr0EB1+/1PzSSLzNPZZgbbxxTemuXlvtAI57bH1r/kZaYPbsbTYS1K1764W1cLc='} 2026-03-10T13:31:41.601 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm09.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1052', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 13:30:26.812399', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:09', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBN9cIi579zkFDNlnBCNj07QqfZg3jyLGakqqfbeo1mtH/qzQMrJI00ZI+Et0sLWF8PX+vFMhiQk+Fs8Y6H4MtI='} 2026-03-10T13:31:41.601 INFO:teuthology.run_tasks:Running task internal.add_remotes... 2026-03-10T13:31:41.602 INFO:teuthology.task.internal:roles: ubuntu@vm05.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'ceph.rgw.foo.a', 'node-exporter.a', 'alertmanager.a'] 2026-03-10T13:31:41.602 INFO:teuthology.task.internal:roles: ubuntu@vm09.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b', 'ceph.iscsi.iscsi.a'] 2026-03-10T13:31:41.602 INFO:teuthology.run_tasks:Running task console_log... 2026-03-10T13:31:41.610 DEBUG:teuthology.task.console_log:vm05 does not support IPMI; excluding 2026-03-10T13:31:41.617 DEBUG:teuthology.task.console_log:vm09 does not support IPMI; excluding 2026-03-10T13:31:41.618 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7ff955282170>, signals=[15]) 2026-03-10T13:31:41.618 INFO:teuthology.run_tasks:Running task internal.connect... 2026-03-10T13:31:41.619 INFO:teuthology.task.internal:Opening connections... 2026-03-10T13:31:41.619 DEBUG:teuthology.task.internal:connecting to ubuntu@vm05.local 2026-03-10T13:31:41.619 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60} 2026-03-10T13:31:41.678 DEBUG:teuthology.task.internal:connecting to ubuntu@vm09.local 2026-03-10T13:31:41.678 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60} 2026-03-10T13:31:41.738 INFO:teuthology.run_tasks:Running task internal.push_inventory... 
2026-03-10T13:31:41.739 DEBUG:teuthology.orchestra.run.vm05:> uname -m 2026-03-10T13:31:41.794 INFO:teuthology.orchestra.run.vm05.stdout:x86_64 2026-03-10T13:31:41.794 DEBUG:teuthology.orchestra.run.vm05:> cat /etc/os-release 2026-03-10T13:31:41.848 INFO:teuthology.orchestra.run.vm05.stdout:NAME="CentOS Stream" 2026-03-10T13:31:41.848 INFO:teuthology.orchestra.run.vm05.stdout:VERSION="9" 2026-03-10T13:31:41.848 INFO:teuthology.orchestra.run.vm05.stdout:ID="centos" 2026-03-10T13:31:41.848 INFO:teuthology.orchestra.run.vm05.stdout:ID_LIKE="rhel fedora" 2026-03-10T13:31:41.848 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_ID="9" 2026-03-10T13:31:41.848 INFO:teuthology.orchestra.run.vm05.stdout:PLATFORM_ID="platform:el9" 2026-03-10T13:31:41.848 INFO:teuthology.orchestra.run.vm05.stdout:PRETTY_NAME="CentOS Stream 9" 2026-03-10T13:31:41.848 INFO:teuthology.orchestra.run.vm05.stdout:ANSI_COLOR="0;31" 2026-03-10T13:31:41.848 INFO:teuthology.orchestra.run.vm05.stdout:LOGO="fedora-logo-icon" 2026-03-10T13:31:41.848 INFO:teuthology.orchestra.run.vm05.stdout:CPE_NAME="cpe:/o:centos:centos:9" 2026-03-10T13:31:41.848 INFO:teuthology.orchestra.run.vm05.stdout:HOME_URL="https://centos.org/" 2026-03-10T13:31:41.848 INFO:teuthology.orchestra.run.vm05.stdout:BUG_REPORT_URL="https://issues.redhat.com/" 2026-03-10T13:31:41.848 INFO:teuthology.orchestra.run.vm05.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9" 2026-03-10T13:31:41.848 INFO:teuthology.orchestra.run.vm05.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream" 2026-03-10T13:31:41.848 INFO:teuthology.lock.ops:Updating vm05.local on lock server 2026-03-10T13:31:41.853 DEBUG:teuthology.orchestra.run.vm09:> uname -m 2026-03-10T13:31:41.867 INFO:teuthology.orchestra.run.vm09.stdout:x86_64 2026-03-10T13:31:41.867 DEBUG:teuthology.orchestra.run.vm09:> cat /etc/os-release 2026-03-10T13:31:41.921 INFO:teuthology.orchestra.run.vm09.stdout:NAME="CentOS Stream" 2026-03-10T13:31:41.922 INFO:teuthology.orchestra.run.vm09.stdout:VERSION="9" 2026-03-10T13:31:41.922 INFO:teuthology.orchestra.run.vm09.stdout:ID="centos" 2026-03-10T13:31:41.922 INFO:teuthology.orchestra.run.vm09.stdout:ID_LIKE="rhel fedora" 2026-03-10T13:31:41.922 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_ID="9" 2026-03-10T13:31:41.922 INFO:teuthology.orchestra.run.vm09.stdout:PLATFORM_ID="platform:el9" 2026-03-10T13:31:41.922 INFO:teuthology.orchestra.run.vm09.stdout:PRETTY_NAME="CentOS Stream 9" 2026-03-10T13:31:41.922 INFO:teuthology.orchestra.run.vm09.stdout:ANSI_COLOR="0;31" 2026-03-10T13:31:41.922 INFO:teuthology.orchestra.run.vm09.stdout:LOGO="fedora-logo-icon" 2026-03-10T13:31:41.922 INFO:teuthology.orchestra.run.vm09.stdout:CPE_NAME="cpe:/o:centos:centos:9" 2026-03-10T13:31:41.922 INFO:teuthology.orchestra.run.vm09.stdout:HOME_URL="https://centos.org/" 2026-03-10T13:31:41.922 INFO:teuthology.orchestra.run.vm09.stdout:BUG_REPORT_URL="https://issues.redhat.com/" 2026-03-10T13:31:41.922 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9" 2026-03-10T13:31:41.922 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream" 2026-03-10T13:31:41.922 INFO:teuthology.lock.ops:Updating vm09.local on lock server 2026-03-10T13:31:41.926 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles... 2026-03-10T13:31:41.928 INFO:teuthology.run_tasks:Running task internal.check_conflict... 2026-03-10T13:31:41.929 INFO:teuthology.task.internal:Checking for old test directory... 
2026-03-10T13:31:41.929 DEBUG:teuthology.orchestra.run.vm05:> test '!' -e /home/ubuntu/cephtest 2026-03-10T13:31:41.931 DEBUG:teuthology.orchestra.run.vm09:> test '!' -e /home/ubuntu/cephtest 2026-03-10T13:31:41.977 INFO:teuthology.run_tasks:Running task internal.check_ceph_data... 2026-03-10T13:31:41.978 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph... 2026-03-10T13:31:41.979 DEBUG:teuthology.orchestra.run.vm05:> test -z $(ls -A /var/lib/ceph) 2026-03-10T13:31:41.986 DEBUG:teuthology.orchestra.run.vm09:> test -z $(ls -A /var/lib/ceph) 2026-03-10T13:31:42.002 INFO:teuthology.orchestra.run.vm05.stderr:ls: cannot access '/var/lib/ceph': No such file or directory 2026-03-10T13:31:42.031 INFO:teuthology.orchestra.run.vm09.stderr:ls: cannot access '/var/lib/ceph': No such file or directory 2026-03-10T13:31:42.032 INFO:teuthology.run_tasks:Running task internal.vm_setup... 2026-03-10T13:31:42.039 DEBUG:teuthology.orchestra.run.vm05:> test -e /ceph-qa-ready 2026-03-10T13:31:42.060 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T13:31:42.259 DEBUG:teuthology.orchestra.run.vm09:> test -e /ceph-qa-ready 2026-03-10T13:31:42.274 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T13:31:42.456 INFO:teuthology.run_tasks:Running task internal.base... 2026-03-10T13:31:42.458 INFO:teuthology.task.internal:Creating test directory... 2026-03-10T13:31:42.458 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest 2026-03-10T13:31:42.460 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest 2026-03-10T13:31:42.477 INFO:teuthology.run_tasks:Running task internal.archive_upload... 2026-03-10T13:31:42.479 INFO:teuthology.run_tasks:Running task internal.archive... 2026-03-10T13:31:42.480 INFO:teuthology.task.internal:Creating archive directory... 2026-03-10T13:31:42.480 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive 2026-03-10T13:31:42.520 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive 2026-03-10T13:31:42.538 INFO:teuthology.run_tasks:Running task internal.coredump... 2026-03-10T13:31:42.540 INFO:teuthology.task.internal:Enabling coredump saving... 
2026-03-10T13:31:42.540 DEBUG:teuthology.orchestra.run.vm05:> test -f /run/.containerenv -o -f /.dockerenv 2026-03-10T13:31:42.593 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T13:31:42.594 DEBUG:teuthology.orchestra.run.vm09:> test -f /run/.containerenv -o -f /.dockerenv 2026-03-10T13:31:42.607 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T13:31:42.607 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf 2026-03-10T13:31:42.635 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf 2026-03-10T13:31:42.662 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-10T13:31:42.672 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-10T13:31:42.674 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-10T13:31:42.684 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-10T13:31:42.685 INFO:teuthology.run_tasks:Running task internal.sudo... 2026-03-10T13:31:42.686 INFO:teuthology.task.internal:Configuring sudo... 2026-03-10T13:31:42.686 DEBUG:teuthology.orchestra.run.vm05:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers 2026-03-10T13:31:42.716 DEBUG:teuthology.orchestra.run.vm09:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers 2026-03-10T13:31:42.750 INFO:teuthology.run_tasks:Running task internal.syslog... 2026-03-10T13:31:42.752 INFO:teuthology.task.internal.syslog:Starting syslog monitoring... 
2026-03-10T13:31:42.752 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog 2026-03-10T13:31:42.785 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog 2026-03-10T13:31:42.804 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-10T13:31:42.863 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-10T13:31:42.921 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T13:31:42.921 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf 2026-03-10T13:31:42.986 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-10T13:31:43.011 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-10T13:31:43.071 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T13:31:43.071 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf 2026-03-10T13:31:43.135 DEBUG:teuthology.orchestra.run.vm05:> sudo service rsyslog restart 2026-03-10T13:31:43.137 DEBUG:teuthology.orchestra.run.vm09:> sudo service rsyslog restart 2026-03-10T13:31:43.165 INFO:teuthology.orchestra.run.vm05.stderr:Redirecting to /bin/systemctl restart rsyslog.service 2026-03-10T13:31:43.207 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service 2026-03-10T13:31:43.458 INFO:teuthology.run_tasks:Running task internal.timer... 2026-03-10T13:31:43.459 INFO:teuthology.task.internal:Starting timer... 2026-03-10T13:31:43.460 INFO:teuthology.run_tasks:Running task pcp... 2026-03-10T13:31:43.463 INFO:teuthology.run_tasks:Running task selinux... 2026-03-10T13:31:43.465 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0']} 2026-03-10T13:31:43.465 INFO:teuthology.task.selinux:Excluding vm05: VMs are not yet supported 2026-03-10T13:31:43.465 INFO:teuthology.task.selinux:Excluding vm09: VMs are not yet supported 2026-03-10T13:31:43.465 DEBUG:teuthology.task.selinux:Getting current SELinux state 2026-03-10T13:31:43.465 DEBUG:teuthology.task.selinux:Existing SELinux modes: {} 2026-03-10T13:31:43.465 INFO:teuthology.task.selinux:Putting SELinux into permissive mode 2026-03-10T13:31:43.465 INFO:teuthology.run_tasks:Running task ansible.cephlab... 
2026-03-10T13:31:43.467 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}} 2026-03-10T13:31:43.467 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git 2026-03-10T13:31:43.468 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin 2026-03-10T13:31:44.062 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main 2026-03-10T13:31:44.068 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}] 2026-03-10T13:31:44.068 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryurenn0i8 --limit vm05.local,vm09.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs 2026-03-10T13:33:28.712 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm05.local'), Remote(name='ubuntu@vm09.local')] 2026-03-10T13:33:28.712 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm05.local' 2026-03-10T13:33:28.713 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60} 2026-03-10T13:33:28.776 DEBUG:teuthology.orchestra.run.vm05:> true 2026-03-10T13:33:28.856 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm05.local' 2026-03-10T13:33:28.856 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm09.local' 2026-03-10T13:33:28.856 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60} 2026-03-10T13:33:28.919 DEBUG:teuthology.orchestra.run.vm09:> true 2026-03-10T13:33:28.994 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm09.local' 2026-03-10T13:33:28.994 INFO:teuthology.run_tasks:Running task clock... 2026-03-10T13:33:28.997 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew... 
2026-03-10T13:33:28.997 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T13:33:28.997 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T13:33:28.999 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T13:33:28.999 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T13:33:29.037 INFO:teuthology.orchestra.run.vm05.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T13:33:29.057 INFO:teuthology.orchestra.run.vm05.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T13:33:29.077 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T13:33:29.087 INFO:teuthology.orchestra.run.vm05.stderr:sudo: ntpd: command not found
2026-03-10T13:33:29.093 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T13:33:29.101 INFO:teuthology.orchestra.run.vm05.stdout:506 Cannot talk to daemon
2026-03-10T13:33:29.121 INFO:teuthology.orchestra.run.vm05.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T13:33:29.130 INFO:teuthology.orchestra.run.vm09.stderr:sudo: ntpd: command not found
2026-03-10T13:33:29.142 INFO:teuthology.orchestra.run.vm09.stdout:506 Cannot talk to daemon
2026-03-10T13:33:29.145 INFO:teuthology.orchestra.run.vm05.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T13:33:29.164 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T13:33:29.183 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T13:33:29.202 INFO:teuthology.orchestra.run.vm05.stderr:bash: line 1: ntpq: command not found
2026-03-10T13:33:29.235 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found
2026-03-10T13:33:29.454 INFO:teuthology.orchestra.run.vm05.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T13:33:29.455 INFO:teuthology.orchestra.run.vm05.stdout:===============================================================================
2026-03-10T13:33:29.455 INFO:teuthology.orchestra.run.vm05.stdout:^? ntp.kernfusion.at 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T13:33:29.455 INFO:teuthology.orchestra.run.vm05.stdout:^? 172-104-154-182.ip.linod> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T13:33:29.455 INFO:teuthology.orchestra.run.vm05.stdout:^? 148.0.90.77.hostbrr.com 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T13:33:29.455 INFO:teuthology.orchestra.run.vm05.stdout:^? sambuca.psychonet.co.uk 2 6 1 0 +197us[ +197us] +/- 27ms
2026-03-10T13:33:29.455 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T13:33:29.455 INFO:teuthology.orchestra.run.vm09.stdout:===============================================================================
2026-03-10T13:33:29.456 INFO:teuthology.orchestra.run.vm09.stdout:^? 148.0.90.77.hostbrr.com 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T13:33:29.456 INFO:teuthology.orchestra.run.vm09.stdout:^? sambuca.psychonet.co.uk 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T13:33:29.456 INFO:teuthology.orchestra.run.vm09.stdout:^? ntp.kernfusion.at 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T13:33:29.456 INFO:teuthology.orchestra.run.vm09.stdout:^? 172-104-154-182.ip.linod> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T13:33:29.456 INFO:teuthology.run_tasks:Running task pexec...
2026-03-10T13:33:29.458 INFO:teuthology.task.pexec:Executing custom commands...
2026-03-10T13:33:29.458 DEBUG:teuthology.orchestra.run.vm05:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T13:33:29.458 DEBUG:teuthology.orchestra.run.vm09:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T13:33:29.460 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo dnf remove nvme-cli -y
2026-03-10T13:33:29.460 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T13:33:29.460 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T13:33:29.460 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T13:33:29.460 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm09.local
2026-03-10T13:33:29.460 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T13:33:29.460 INFO:teuthology.task.pexec:sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T13:33:29.460 INFO:teuthology.task.pexec:sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T13:33:29.460 INFO:teuthology.task.pexec:sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T13:33:29.499 DEBUG:teuthology.task.pexec:ubuntu@vm05.local< sudo dnf remove nvme-cli -y
2026-03-10T13:33:29.499 DEBUG:teuthology.task.pexec:ubuntu@vm05.local< sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T13:33:29.500 DEBUG:teuthology.task.pexec:ubuntu@vm05.local< sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T13:33:29.500 DEBUG:teuthology.task.pexec:ubuntu@vm05.local< sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T13:33:29.500 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm05.local
2026-03-10T13:33:29.500 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T13:33:29.500 INFO:teuthology.task.pexec:sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T13:33:29.500 INFO:teuthology.task.pexec:sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T13:33:29.500 INFO:teuthology.task.pexec:sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T13:33:29.691 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: nvme-cli
2026-03-10T13:33:29.691 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T13:33:29.698 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-10T13:33:29.699 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-10T13:33:29.699 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-10T13:33:29.750 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: nvme-cli 2026-03-10T13:33:29.750 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal. 2026-03-10T13:33:29.753 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-10T13:33:29.753 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-10T13:33:29.753 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-10T13:33:30.198 INFO:teuthology.orchestra.run.vm09.stdout:Last metadata expiration check: 0:01:13 ago on Tue 10 Mar 2026 01:32:17 PM UTC. 2026-03-10T13:33:30.263 INFO:teuthology.orchestra.run.vm05.stdout:Last metadata expiration check: 0:01:09 ago on Tue 10 Mar 2026 01:32:21 PM UTC. 2026-03-10T13:33:30.336 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-10T13:33:30.336 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T13:33:30.336 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-10T13:33:30.336 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T13:33:30.336 INFO:teuthology.orchestra.run.vm09.stdout:Installing: 2026-03-10T13:33:30.336 INFO:teuthology.orchestra.run.vm09.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M 2026-03-10T13:33:30.336 INFO:teuthology.orchestra.run.vm09.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k 2026-03-10T13:33:30.336 INFO:teuthology.orchestra.run.vm09.stdout: runc x86_64 4:1.4.0-2.el9 appstream 4.0 M 2026-03-10T13:33:30.336 INFO:teuthology.orchestra.run.vm09.stdout:Installing dependencies: 2026-03-10T13:33:30.336 INFO:teuthology.orchestra.run.vm09.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k 2026-03-10T13:33:30.336 INFO:teuthology.orchestra.run.vm09.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k 2026-03-10T13:33:30.336 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k 2026-03-10T13:33:30.336 INFO:teuthology.orchestra.run.vm09.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k 2026-03-10T13:33:30.337 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:33:30.337 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-10T13:33:30.337 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T13:33:30.337 INFO:teuthology.orchestra.run.vm09.stdout:Install 7 Packages 2026-03-10T13:33:30.337 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:33:30.337 INFO:teuthology.orchestra.run.vm09.stdout:Total download size: 6.3 M 2026-03-10T13:33:30.337 INFO:teuthology.orchestra.run.vm09.stdout:Installed size: 24 M 2026-03-10T13:33:30.337 INFO:teuthology.orchestra.run.vm09.stdout:Downloading Packages: 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 
2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout:Installing: 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout: runc x86_64 4:1.4.0-2.el9 appstream 4.0 M 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout:Installing dependencies: 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout:Install 7 Packages 2026-03-10T13:33:30.390 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:33:30.391 INFO:teuthology.orchestra.run.vm05.stdout:Total download size: 6.3 M 2026-03-10T13:33:30.391 INFO:teuthology.orchestra.run.vm05.stdout:Installed size: 24 M 2026-03-10T13:33:30.391 INFO:teuthology.orchestra.run.vm05.stdout:Downloading Packages: 2026-03-10T13:33:30.638 INFO:teuthology.orchestra.run.vm09.stdout:(1/7): nvmetcli-0.8-3.el9.noarch.rpm 435 kB/s | 44 kB 00:00 2026-03-10T13:33:30.726 INFO:teuthology.orchestra.run.vm09.stdout:(2/7): python3-kmod-0.9-32.el9.x86_64.rpm 966 kB/s | 84 kB 00:00 2026-03-10T13:33:30.790 INFO:teuthology.orchestra.run.vm09.stdout:(3/7): python3-configshell-1.1.30-1.el9.noarch. 285 kB/s | 72 kB 00:00 2026-03-10T13:33:30.795 INFO:teuthology.orchestra.run.vm05.stdout:(1/7): nvmetcli-0.8-3.el9.noarch.rpm 232 kB/s | 44 kB 00:00 2026-03-10T13:33:30.909 INFO:teuthology.orchestra.run.vm05.stdout:(2/7): python3-configshell-1.1.30-1.el9.noarch. 
237 kB/s | 72 kB 00:00 2026-03-10T13:33:30.995 INFO:teuthology.orchestra.run.vm05.stdout:(3/7): nvme-cli-2.16-1.el9.x86_64.rpm 3.0 MB/s | 1.2 MB 00:00 2026-03-10T13:33:31.206 INFO:teuthology.orchestra.run.vm05.stdout:(4/7): python3-pyparsing-2.4.7-9.el9.noarch.rpm 508 kB/s | 150 kB 00:00 2026-03-10T13:33:31.269 INFO:teuthology.orchestra.run.vm05.stdout:(5/7): python3-kmod-0.9-32.el9.x86_64.rpm 177 kB/s | 84 kB 00:00 2026-03-10T13:33:31.386 INFO:teuthology.orchestra.run.vm09.stdout:(4/7): python3-urwid-2.1.2-4.el9.x86_64.rpm 1.4 MB/s | 837 kB 00:00 2026-03-10T13:33:31.706 INFO:teuthology.orchestra.run.vm09.stdout:(5/7): python3-pyparsing-2.4.7-9.el9.noarch.rpm 153 kB/s | 150 kB 00:00 2026-03-10T13:33:31.748 INFO:teuthology.orchestra.run.vm05.stdout:(6/7): python3-urwid-2.1.2-4.el9.x86_64.rpm 1.1 MB/s | 837 kB 00:00 2026-03-10T13:33:31.911 INFO:teuthology.orchestra.run.vm09.stdout:(6/7): runc-1.4.0-2.el9.x86_64.rpm 7.5 MB/s | 4.0 MB 00:00 2026-03-10T13:33:32.349 INFO:teuthology.orchestra.run.vm09.stdout:(7/7): nvme-cli-2.16-1.el9.x86_64.rpm 650 kB/s | 1.2 MB 00:01 2026-03-10T13:33:32.350 INFO:teuthology.orchestra.run.vm09.stdout:-------------------------------------------------------------------------------- 2026-03-10T13:33:32.350 INFO:teuthology.orchestra.run.vm09.stdout:Total 3.1 MB/s | 6.3 MB 00:02 2026-03-10T13:33:32.448 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-10T13:33:32.456 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-10T13:33:32.456 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-10T13:33:32.533 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-10T13:33:32.533 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-10T13:33:32.730 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-10T13:33:32.743 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/7 2026-03-10T13:33:32.756 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/7 2026-03-10T13:33:32.766 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/7 2026-03-10T13:33:32.776 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/7 2026-03-10T13:33:32.778 INFO:teuthology.orchestra.run.vm09.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/7 2026-03-10T13:33:32.841 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/7 2026-03-10T13:33:33.006 INFO:teuthology.orchestra.run.vm09.stdout: Installing : runc-4:1.4.0-2.el9.x86_64 6/7 2026-03-10T13:33:33.011 INFO:teuthology.orchestra.run.vm09.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 7/7 2026-03-10T13:33:33.416 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 7/7 2026-03-10T13:33:33.416 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service. 
2026-03-10T13:33:33.416 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:33:34.146 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/7 2026-03-10T13:33:34.146 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/7 2026-03-10T13:33:34.146 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/7 2026-03-10T13:33:34.146 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/7 2026-03-10T13:33:34.147 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/7 2026-03-10T13:33:34.147 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/7 2026-03-10T13:33:34.347 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : runc-4:1.4.0-2.el9.x86_64 7/7 2026-03-10T13:33:34.347 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:33:34.347 INFO:teuthology.orchestra.run.vm09.stdout:Installed: 2026-03-10T13:33:34.347 INFO:teuthology.orchestra.run.vm09.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch 2026-03-10T13:33:34.347 INFO:teuthology.orchestra.run.vm09.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64 2026-03-10T13:33:34.347 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64 2026-03-10T13:33:34.347 INFO:teuthology.orchestra.run.vm09.stdout: runc-4:1.4.0-2.el9.x86_64 2026-03-10T13:33:34.347 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:33:34.347 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-10T13:33:34.622 DEBUG:teuthology.parallel:result is None 2026-03-10T13:33:35.218 INFO:teuthology.orchestra.run.vm05.stdout:(7/7): runc-1.4.0-2.el9.x86_64.rpm 1.0 MB/s | 4.0 MB 00:04 2026-03-10T13:33:35.220 INFO:teuthology.orchestra.run.vm05.stdout:-------------------------------------------------------------------------------- 2026-03-10T13:33:35.220 INFO:teuthology.orchestra.run.vm05.stdout:Total 1.3 MB/s | 6.3 MB 00:04 2026-03-10T13:33:35.327 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-10T13:33:35.335 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-10T13:33:35.335 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-10T13:33:35.416 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 
2026-03-10T13:33:35.423 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-10T13:33:36.060 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-10T13:33:36.129 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/7 2026-03-10T13:33:36.156 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/7 2026-03-10T13:33:36.165 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/7 2026-03-10T13:33:36.175 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/7 2026-03-10T13:33:36.177 INFO:teuthology.orchestra.run.vm05.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/7 2026-03-10T13:33:36.245 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/7 2026-03-10T13:33:36.407 INFO:teuthology.orchestra.run.vm05.stdout: Installing : runc-4:1.4.0-2.el9.x86_64 6/7 2026-03-10T13:33:36.415 INFO:teuthology.orchestra.run.vm05.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 7/7 2026-03-10T13:33:36.804 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 7/7 2026-03-10T13:33:36.805 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service. 2026-03-10T13:33:36.805 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:33:37.369 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/7 2026-03-10T13:33:37.369 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/7 2026-03-10T13:33:37.369 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/7 2026-03-10T13:33:37.369 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/7 2026-03-10T13:33:37.369 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/7 2026-03-10T13:33:37.369 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/7 2026-03-10T13:33:37.457 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : runc-4:1.4.0-2.el9.x86_64 7/7 2026-03-10T13:33:37.457 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:33:37.457 INFO:teuthology.orchestra.run.vm05.stdout:Installed: 2026-03-10T13:33:37.457 INFO:teuthology.orchestra.run.vm05.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch 2026-03-10T13:33:37.457 INFO:teuthology.orchestra.run.vm05.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64 2026-03-10T13:33:37.457 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64 2026-03-10T13:33:37.457 INFO:teuthology.orchestra.run.vm05.stdout: runc-4:1.4.0-2.el9.x86_64 2026-03-10T13:33:37.457 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:33:37.457 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-10T13:33:37.560 DEBUG:teuthology.parallel:result is None 2026-03-10T13:33:37.560 INFO:teuthology.run_tasks:Running task install... 
2026-03-10T13:33:37.562 DEBUG:teuthology.task.install:project ceph 2026-03-10T13:33:37.562 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_packages': ['cephadm'], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}} 2026-03-10T13:33:37.562 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}} 2026-03-10T13:33:37.562 INFO:teuthology.task.install:Using flavor: default 2026-03-10T13:33:37.565 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']} 2026-03-10T13:33:37.565 INFO:teuthology.task.install:extra packages: [] 2026-03-10T13:33:37.565 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False} 2026-03-10T13:33:37.565 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:33:37.565 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False} 2026-03-10T13:33:37.565 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:33:38.149 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/ 2026-03-10T13:33:38.150 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb 2026-03-10T13:33:38.224 
INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/ 2026-03-10T13:33:38.224 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb 2026-03-10T13:33:38.708 INFO:teuthology.packaging:Writing yum repo: [ceph] name=ceph packages for $basearch baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch enabled=1 gpgcheck=0 type=rpm-md [ceph-noarch] name=ceph noarch packages baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch enabled=1 gpgcheck=0 type=rpm-md [ceph-source] name=ceph source packages baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS enabled=1 gpgcheck=0 type=rpm-md 2026-03-10T13:33:38.708 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T13:33:38.708 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/yum.repos.d/ceph.repo 2026-03-10T13:33:38.749 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64 2026-03-10T13:33:38.749 DEBUG:teuthology.orchestra.run.vm05:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi 2026-03-10T13:33:38.765 INFO:teuthology.packaging:Writing yum repo: [ceph] name=ceph packages for $basearch baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch enabled=1 gpgcheck=0 type=rpm-md [ceph-noarch] name=ceph noarch packages baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch enabled=1 gpgcheck=0 type=rpm-md [ceph-source] name=ceph source packages baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS enabled=1 gpgcheck=0 type=rpm-md 2026-03-10T13:33:38.765 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T13:33:38.765 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/yum.repos.d/ceph.repo 2026-03-10T13:33:38.799 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64 2026-03-10T13:33:38.799 DEBUG:teuthology.orchestra.run.vm09:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi 2026-03-10T13:33:38.822 DEBUG:teuthology.orchestra.run.vm05:> sudo 
touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig 2026-03-10T13:33:38.873 DEBUG:teuthology.orchestra.run.vm09:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig 2026-03-10T13:33:38.914 DEBUG:teuthology.orchestra.run.vm05:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf 2026-03-10T13:33:38.950 INFO:teuthology.orchestra.run.vm05.stdout:check_obsoletes = 1 2026-03-10T13:33:38.951 DEBUG:teuthology.orchestra.run.vm05:> sudo yum clean all 2026-03-10T13:33:38.955 DEBUG:teuthology.orchestra.run.vm09:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf 2026-03-10T13:33:39.026 INFO:teuthology.orchestra.run.vm09.stdout:check_obsoletes = 1 2026-03-10T13:33:39.028 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean all 2026-03-10T13:33:39.160 INFO:teuthology.orchestra.run.vm05.stdout:41 files removed 2026-03-10T13:33:39.188 DEBUG:teuthology.orchestra.run.vm05:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath 2026-03-10T13:33:39.199 INFO:teuthology.orchestra.run.vm09.stdout:41 files removed 2026-03-10T13:33:39.224 DEBUG:teuthology.orchestra.run.vm09:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath 2026-03-10T13:33:40.556 INFO:teuthology.orchestra.run.vm09.stdout:ceph packages for x86_64 71 kB/s | 84 kB 00:01 2026-03-10T13:33:40.593 INFO:teuthology.orchestra.run.vm05.stdout:ceph packages for x86_64 71 kB/s | 84 kB 00:01 2026-03-10T13:33:41.527 INFO:teuthology.orchestra.run.vm09.stdout:ceph noarch packages 12 kB/s | 12 kB 00:00 2026-03-10T13:33:41.563 INFO:teuthology.orchestra.run.vm05.stdout:ceph noarch packages 12 kB/s | 12 kB 00:00 2026-03-10T13:33:42.622 INFO:teuthology.orchestra.run.vm09.stdout:ceph source packages 1.8 kB/s | 1.9 kB 00:01 2026-03-10T13:33:42.628 INFO:teuthology.orchestra.run.vm05.stdout:ceph source packages 1.8 kB/s | 1.9 kB 00:01 2026-03-10T13:33:43.690 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - BaseOS 8.5 MB/s | 8.9 MB 00:01 2026-03-10T13:33:43.799 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - BaseOS 7.7 MB/s | 8.9 MB 00:01 2026-03-10T13:33:45.563 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - AppStream 25 MB/s | 27 MB 00:01 2026-03-10T13:33:45.637 
INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - AppStream 21 MB/s | 27 MB 00:01 2026-03-10T13:33:49.312 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - CRB 8.0 MB/s | 8.0 MB 00:00 2026-03-10T13:33:49.876 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - CRB 5.0 MB/s | 8.0 MB 00:01 2026-03-10T13:33:50.355 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - Extras packages 121 kB/s | 20 kB 00:00 2026-03-10T13:33:50.812 INFO:teuthology.orchestra.run.vm05.stdout:Extra Packages for Enterprise Linux 55 MB/s | 20 MB 00:00 2026-03-10T13:33:51.216 INFO:teuthology.orchestra.run.vm09.stdout:CentOS Stream 9 - Extras packages 46 kB/s | 20 kB 00:00 2026-03-10T13:33:52.100 INFO:teuthology.orchestra.run.vm09.stdout:Extra Packages for Enterprise Linux 25 MB/s | 20 MB 00:00 2026-03-10T13:33:55.529 INFO:teuthology.orchestra.run.vm05.stdout:lab-extras 65 kB/s | 50 kB 00:00 2026-03-10T13:33:56.588 INFO:teuthology.orchestra.run.vm09.stdout:lab-extras 65 kB/s | 50 kB 00:00 2026-03-10T13:33:56.896 INFO:teuthology.orchestra.run.vm05.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T13:33:56.897 INFO:teuthology.orchestra.run.vm05.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T13:33:56.900 INFO:teuthology.orchestra.run.vm05.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 2026-03-10T13:33:56.901 INFO:teuthology.orchestra.run.vm05.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-10T13:33:56.928 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout:====================================================================================== 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout:====================================================================================== 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout:Installing: 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M 2026-03-10T13:33:56.933 
INFO:teuthology.orchestra.run.vm05.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-10T13:33:56.933 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout:Upgrading: 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout:Installing dependencies: 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-10T13:33:56.934 
INFO:teuthology.orchestra.run.vm05.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: protobuf x86_64 
3.14.0-17.el9 appstream 1.0 M 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-10T13:33:56.934 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe x86_64 
1.1.1-12.el9 appstream 35 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-10T13:33:56.935 
INFO:teuthology.orchestra.run.vm05.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout:Installing weak dependencies: 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout:====================================================================================== 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout:Install 134 Packages 2026-03-10T13:33:56.935 INFO:teuthology.orchestra.run.vm05.stdout:Upgrade 2 Packages 2026-03-10T13:33:56.936 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:33:56.936 INFO:teuthology.orchestra.run.vm05.stdout:Total download size: 210 M 2026-03-10T13:33:56.936 INFO:teuthology.orchestra.run.vm05.stdout:Downloading Packages: 2026-03-10T13:33:57.983 INFO:teuthology.orchestra.run.vm09.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T13:33:57.984 INFO:teuthology.orchestra.run.vm09.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T13:33:57.988 INFO:teuthology.orchestra.run.vm09.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 2026-03-10T13:33:57.988 INFO:teuthology.orchestra.run.vm09.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-10T13:33:58.019 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout:====================================================================================== 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout:====================================================================================== 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout:Installing: 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M 2026-03-10T13:33:58.024 
INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout:Upgrading: 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout:Installing dependencies: 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M 2026-03-10T13:33:58.024 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath x86_64 
11.5.0-14.el9 baseos 184 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: 
python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-10T13:33:58.025 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru 
noarch 0.7-16.el9 epel 31 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout:Installing weak dependencies: 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout:====================================================================================== 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout:Install 134 Packages 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout:Upgrade 2 Packages 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout:Total download size: 210 M 2026-03-10T13:33:58.026 INFO:teuthology.orchestra.run.vm09.stdout:Downloading Packages: 2026-03-10T13:33:58.679 INFO:teuthology.orchestra.run.vm05.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 14 kB/s | 6.5 kB 00:00 2026-03-10T13:33:59.269 INFO:teuthology.orchestra.run.vm09.stdout:(1/136): 
ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 14 kB/s | 6.5 kB 00:00 2026-03-10T13:33:59.510 INFO:teuthology.orchestra.run.vm05.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.4 MB/s | 1.2 MB 00:00 2026-03-10T13:33:59.632 INFO:teuthology.orchestra.run.vm05.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 1.2 MB/s | 145 kB 00:00 2026-03-10T13:34:00.063 INFO:teuthology.orchestra.run.vm09.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.5 MB/s | 1.2 MB 00:00 2026-03-10T13:34:00.178 INFO:teuthology.orchestra.run.vm09.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 1.2 MB/s | 145 kB 00:00 2026-03-10T13:34:00.198 INFO:teuthology.orchestra.run.vm05.stdout:(4/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 2.8 MB/s | 5.5 MB 00:01 2026-03-10T13:34:00.231 INFO:teuthology.orchestra.run.vm05.stdout:(5/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 4.0 MB/s | 2.4 MB 00:00 2026-03-10T13:34:00.433 INFO:teuthology.orchestra.run.vm05.stdout:(6/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 4.6 MB/s | 1.1 MB 00:00 2026-03-10T13:34:00.751 INFO:teuthology.orchestra.run.vm09.stdout:(4/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 4.2 MB/s | 2.4 MB 00:00 2026-03-10T13:34:00.809 INFO:teuthology.orchestra.run.vm09.stdout:(5/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 2.8 MB/s | 5.5 MB 00:01 2026-03-10T13:34:00.984 INFO:teuthology.orchestra.run.vm09.stdout:(6/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 4.6 MB/s | 1.1 MB 00:00 2026-03-10T13:34:01.086 INFO:teuthology.orchestra.run.vm05.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 5.5 MB/s | 4.7 MB 00:00 2026-03-10T13:34:01.539 INFO:teuthology.orchestra.run.vm09.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 6.5 MB/s | 4.7 MB 00:00 2026-03-10T13:34:02.014 INFO:teuthology.orchestra.run.vm05.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 5.7 MB/s | 22 MB 00:03 2026-03-10T13:34:02.129 INFO:teuthology.orchestra.run.vm05.stdout:(9/136): ceph-selinux-19.2.3-678.ge911bdeb.el9. 219 kB/s | 25 kB 00:00 2026-03-10T13:34:02.349 INFO:teuthology.orchestra.run.vm05.stdout:(10/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 8.9 MB/s | 17 MB 00:01 2026-03-10T13:34:02.386 INFO:teuthology.orchestra.run.vm05.stdout:(11/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9 8.3 MB/s | 11 MB 00:01 2026-03-10T13:34:02.466 INFO:teuthology.orchestra.run.vm05.stdout:(12/136): libcephfs-devel-19.2.3-678.ge911bdeb. 289 kB/s | 34 kB 00:00 2026-03-10T13:34:02.536 INFO:teuthology.orchestra.run.vm05.stdout:(13/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 6.5 MB/s | 1.0 MB 00:00 2026-03-10T13:34:02.590 INFO:teuthology.orchestra.run.vm05.stdout:(14/136): libcephsqlite-19.2.3-678.ge911bdeb.el 1.3 MB/s | 163 kB 00:00 2026-03-10T13:34:02.657 INFO:teuthology.orchestra.run.vm05.stdout:(15/136): librados-devel-19.2.3-678.ge911bdeb.e 1.0 MB/s | 127 kB 00:00 2026-03-10T13:34:02.731 INFO:teuthology.orchestra.run.vm05.stdout:(16/136): libradosstriper1-19.2.3-678.ge911bdeb 3.5 MB/s | 503 kB 00:00 2026-03-10T13:34:02.731 INFO:teuthology.orchestra.run.vm09.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 5.6 MB/s | 22 MB 00:03 2026-03-10T13:34:02.766 INFO:teuthology.orchestra.run.vm09.stdout:(9/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9. 
8.8 MB/s | 11 MB 00:01 2026-03-10T13:34:02.853 INFO:teuthology.orchestra.run.vm09.stdout:(10/136): ceph-selinux-19.2.3-678.ge911bdeb.el9 205 kB/s | 25 kB 00:00 2026-03-10T13:34:02.853 INFO:teuthology.orchestra.run.vm05.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 368 kB/s | 45 kB 00:00 2026-03-10T13:34:02.977 INFO:teuthology.orchestra.run.vm09.stdout:(11/136): libcephfs-devel-19.2.3-678.ge911bdeb. 273 kB/s | 34 kB 00:00 2026-03-10T13:34:02.978 INFO:teuthology.orchestra.run.vm05.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 1.1 MB/s | 142 kB 00:00 2026-03-10T13:34:03.032 INFO:teuthology.orchestra.run.vm09.stdout:(12/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 8.3 MB/s | 17 MB 00:02 2026-03-10T13:34:03.104 INFO:teuthology.orchestra.run.vm05.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.3 MB/s | 165 kB 00:00 2026-03-10T13:34:03.106 INFO:teuthology.orchestra.run.vm09.stdout:(13/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 7.5 MB/s | 1.0 MB 00:00 2026-03-10T13:34:03.158 INFO:teuthology.orchestra.run.vm09.stdout:(14/136): libcephsqlite-19.2.3-678.ge911bdeb.el 1.3 MB/s | 163 kB 00:00 2026-03-10T13:34:03.184 INFO:teuthology.orchestra.run.vm05.stdout:(20/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 10 MB/s | 5.4 MB 00:00 2026-03-10T13:34:03.223 INFO:teuthology.orchestra.run.vm05.stdout:(21/136): python3-rados-19.2.3-678.ge911bdeb.el 2.7 MB/s | 323 kB 00:00 2026-03-10T13:34:03.225 INFO:teuthology.orchestra.run.vm09.stdout:(15/136): librados-devel-19.2.3-678.ge911bdeb.e 1.0 MB/s | 127 kB 00:00 2026-03-10T13:34:03.277 INFO:teuthology.orchestra.run.vm09.stdout:(16/136): libradosstriper1-19.2.3-678.ge911bdeb 4.1 MB/s | 503 kB 00:00 2026-03-10T13:34:03.318 INFO:teuthology.orchestra.run.vm05.stdout:(22/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.2 MB/s | 303 kB 00:00 2026-03-10T13:34:03.339 INFO:teuthology.orchestra.run.vm05.stdout:(23/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 858 kB/s | 100 kB 00:00 2026-03-10T13:34:03.391 INFO:teuthology.orchestra.run.vm09.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 395 kB/s | 45 kB 00:00 2026-03-10T13:34:03.438 INFO:teuthology.orchestra.run.vm05.stdout:(24/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 709 kB/s | 85 kB 00:00 2026-03-10T13:34:03.535 INFO:teuthology.orchestra.run.vm09.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 995 kB/s | 142 kB 00:00 2026-03-10T13:34:03.575 INFO:teuthology.orchestra.run.vm05.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.2 MB/s | 171 kB 00:00 2026-03-10T13:34:03.615 INFO:teuthology.orchestra.run.vm05.stdout:(26/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 11 MB/s | 3.1 MB 00:00 2026-03-10T13:34:03.668 INFO:teuthology.orchestra.run.vm09.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.2 MB/s | 165 kB 00:00 2026-03-10T13:34:03.701 INFO:teuthology.orchestra.run.vm05.stdout:(27/136): ceph-grafana-dashboards-19.2.3-678.ge 248 kB/s | 31 kB 00:00 2026-03-10T13:34:03.740 INFO:teuthology.orchestra.run.vm05.stdout:(28/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00 2026-03-10T13:34:03.741 INFO:teuthology.orchestra.run.vm09.stdout:(20/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 10 MB/s | 5.4 MB 00:00 2026-03-10T13:34:03.785 INFO:teuthology.orchestra.run.vm09.stdout:(21/136): python3-rados-19.2.3-678.ge911bdeb.el 2.7 MB/s | 323 kB 00:00 2026-03-10T13:34:03.863 INFO:teuthology.orchestra.run.vm09.stdout:(22/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 
2.4 MB/s | 303 kB 00:00 2026-03-10T13:34:03.927 INFO:teuthology.orchestra.run.vm09.stdout:(23/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 705 kB/s | 100 kB 00:00 2026-03-10T13:34:03.981 INFO:teuthology.orchestra.run.vm09.stdout:(24/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 723 kB/s | 85 kB 00:00 2026-03-10T13:34:04.129 INFO:teuthology.orchestra.run.vm09.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.1 MB/s | 171 kB 00:00 2026-03-10T13:34:04.264 INFO:teuthology.orchestra.run.vm05.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 6.8 MB/s | 3.8 MB 00:00 2026-03-10T13:34:04.265 INFO:teuthology.orchestra.run.vm09.stdout:(26/136): ceph-grafana-dashboards-19.2.3-678.ge 230 kB/s | 31 kB 00:00 2026-03-10T13:34:04.308 INFO:teuthology.orchestra.run.vm05.stdout:(30/136): ceph-mgr-diskprediction-local-19.2.3- 13 MB/s | 7.4 MB 00:00 2026-03-10T13:34:04.338 INFO:teuthology.orchestra.run.vm09.stdout:(27/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 7.6 MB/s | 3.1 MB 00:00 2026-03-10T13:34:04.383 INFO:teuthology.orchestra.run.vm09.stdout:(28/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00 2026-03-10T13:34:04.386 INFO:teuthology.orchestra.run.vm05.stdout:(31/136): ceph-mgr-modules-core-19.2.3-678.ge91 2.0 MB/s | 253 kB 00:00 2026-03-10T13:34:04.425 INFO:teuthology.orchestra.run.vm05.stdout:(32/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 423 kB/s | 49 kB 00:00 2026-03-10T13:34:04.521 INFO:teuthology.orchestra.run.vm05.stdout:(33/136): ceph-prometheus-alerts-19.2.3-678.ge9 126 kB/s | 17 kB 00:00 2026-03-10T13:34:04.557 INFO:teuthology.orchestra.run.vm05.stdout:(34/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.2 MB/s | 299 kB 00:00 2026-03-10T13:34:04.680 INFO:teuthology.orchestra.run.vm05.stdout:(35/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 4.7 MB/s | 769 kB 00:00 2026-03-10T13:34:04.735 INFO:teuthology.orchestra.run.vm09.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 9.6 MB/s | 3.8 MB 00:00 2026-03-10T13:34:04.809 INFO:teuthology.orchestra.run.vm05.stdout:(36/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 313 kB/s | 40 kB 00:00 2026-03-10T13:34:04.835 INFO:teuthology.orchestra.run.vm05.stdout:(37/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.2 MB/s | 351 kB 00:00 2026-03-10T13:34:04.852 INFO:teuthology.orchestra.run.vm09.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 2.1 MB/s | 253 kB 00:00 2026-03-10T13:34:04.910 INFO:teuthology.orchestra.run.vm05.stdout:(38/136): libconfig-1.7.2-9.el9.x86_64.rpm 715 kB/s | 72 kB 00:00 2026-03-10T13:34:04.911 INFO:teuthology.orchestra.run.vm09.stdout:(31/136): ceph-mgr-diskprediction-local-19.2.3- 14 MB/s | 7.4 MB 00:00 2026-03-10T13:34:04.967 INFO:teuthology.orchestra.run.vm09.stdout:(32/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 431 kB/s | 49 kB 00:00 2026-03-10T13:34:04.998 INFO:teuthology.orchestra.run.vm05.stdout:(39/136): libgfortran-11.5.0-14.el9.x86_64.rpm 4.8 MB/s | 794 kB 00:00 2026-03-10T13:34:05.029 INFO:teuthology.orchestra.run.vm09.stdout:(33/136): ceph-prometheus-alerts-19.2.3-678.ge9 141 kB/s | 17 kB 00:00 2026-03-10T13:34:05.033 INFO:teuthology.orchestra.run.vm05.stdout:(40/136): mailcap-2.1.49-5.el9.noarch.rpm 944 kB/s | 33 kB 00:00 2026-03-10T13:34:05.041 INFO:teuthology.orchestra.run.vm05.stdout:(41/136): libquadmath-11.5.0-14.el9.x86_64.rpm 1.4 MB/s | 184 kB 00:00 2026-03-10T13:34:05.084 INFO:teuthology.orchestra.run.vm09.stdout:(34/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 
2.5 MB/s | 299 kB 00:00 2026-03-10T13:34:05.120 INFO:teuthology.orchestra.run.vm05.stdout:(42/136): pciutils-3.7.0-7.el9.x86_64.rpm 1.1 MB/s | 93 kB 00:00 2026-03-10T13:34:05.157 INFO:teuthology.orchestra.run.vm09.stdout:(35/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 5.9 MB/s | 769 kB 00:00 2026-03-10T13:34:05.181 INFO:teuthology.orchestra.run.vm05.stdout:(43/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 1.8 MB/s | 253 kB 00:00 2026-03-10T13:34:05.223 INFO:teuthology.orchestra.run.vm05.stdout:(44/136): python3-ply-3.11-14.el9.noarch.rpm 2.5 MB/s | 106 kB 00:00 2026-03-10T13:34:05.242 INFO:teuthology.orchestra.run.vm09.stdout:(36/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 2.2 MB/s | 351 kB 00:00 2026-03-10T13:34:05.250 INFO:teuthology.orchestra.run.vm09.stdout:(37/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 433 kB/s | 40 kB 00:00 2026-03-10T13:34:05.278 INFO:teuthology.orchestra.run.vm09.stdout:(38/136): libconfig-1.7.2-9.el9.x86_64.rpm 2.0 MB/s | 72 kB 00:00 2026-03-10T13:34:05.278 INFO:teuthology.orchestra.run.vm05.stdout:(45/136): python3-cryptography-36.0.1-5.el9.x86 7.9 MB/s | 1.2 MB 00:00 2026-03-10T13:34:05.291 INFO:teuthology.orchestra.run.vm05.stdout:(46/136): python3-pycparser-2.20-6.el9.noarch.r 2.0 MB/s | 135 kB 00:00 2026-03-10T13:34:05.313 INFO:teuthology.orchestra.run.vm09.stdout:(39/136): libquadmath-11.5.0-14.el9.x86_64.rpm 5.1 MB/s | 184 kB 00:00 2026-03-10T13:34:05.330 INFO:teuthology.orchestra.run.vm09.stdout:(40/136): mailcap-2.1.49-5.el9.noarch.rpm 2.0 MB/s | 33 kB 00:00 2026-03-10T13:34:05.336 INFO:teuthology.orchestra.run.vm09.stdout:(41/136): libgfortran-11.5.0-14.el9.x86_64.rpm 9.0 MB/s | 794 kB 00:00 2026-03-10T13:34:05.342 INFO:teuthology.orchestra.run.vm05.stdout:(47/136): python3-requests-2.25.1-10.el9.noarch 1.9 MB/s | 126 kB 00:00 2026-03-10T13:34:05.370 INFO:teuthology.orchestra.run.vm09.stdout:(42/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 7.3 MB/s | 253 kB 00:00 2026-03-10T13:34:05.382 INFO:teuthology.orchestra.run.vm05.stdout:(48/136): python3-urllib3-1.26.5-7.el9.noarch.r 2.3 MB/s | 218 kB 00:00 2026-03-10T13:34:05.395 INFO:teuthology.orchestra.run.vm09.stdout:(43/136): pciutils-3.7.0-7.el9.x86_64.rpm 1.4 MB/s | 93 kB 00:00 2026-03-10T13:34:05.405 INFO:teuthology.orchestra.run.vm05.stdout:(49/136): unzip-6.0-59.el9.x86_64.rpm 2.8 MB/s | 182 kB 00:00 2026-03-10T13:34:05.433 INFO:teuthology.orchestra.run.vm09.stdout:(44/136): python3-ply-3.11-14.el9.noarch.rpm 2.8 MB/s | 106 kB 00:00 2026-03-10T13:34:05.461 INFO:teuthology.orchestra.run.vm09.stdout:(45/136): python3-pycparser-2.20-6.el9.noarch.r 4.8 MB/s | 135 kB 00:00 2026-03-10T13:34:05.466 INFO:teuthology.orchestra.run.vm09.stdout:(46/136): python3-cryptography-36.0.1-5.el9.x86 13 MB/s | 1.2 MB 00:00 2026-03-10T13:34:05.469 INFO:teuthology.orchestra.run.vm05.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 3.0 MB/s | 266 kB 00:00 2026-03-10T13:34:05.493 INFO:teuthology.orchestra.run.vm09.stdout:(47/136): python3-requests-2.25.1-10.el9.noarch 3.9 MB/s | 126 kB 00:00 2026-03-10T13:34:05.494 INFO:teuthology.orchestra.run.vm09.stdout:(48/136): python3-urllib3-1.26.5-7.el9.noarch.r 7.8 MB/s | 218 kB 00:00 2026-03-10T13:34:05.514 INFO:teuthology.orchestra.run.vm09.stdout:(49/136): zip-3.0-35.el9.x86_64.rpm 13 MB/s | 266 kB 00:00 2026-03-10T13:34:05.522 INFO:teuthology.orchestra.run.vm09.stdout:(50/136): unzip-6.0-59.el9.x86_64.rpm 6.1 MB/s | 182 kB 00:00 2026-03-10T13:34:05.636 INFO:teuthology.orchestra.run.vm05.stdout:(51/136): ceph-test-19.2.3-678.ge911bdeb.el9.x8 14 MB/s | 50 MB 00:03 2026-03-10T13:34:05.660 
INFO:teuthology.orchestra.run.vm05.stdout:(52/136): boost-program-options-1.75.0-13.el9.x 409 kB/s | 104 kB 00:00 2026-03-10T13:34:05.683 INFO:teuthology.orchestra.run.vm05.stdout:(53/136): flexiblas-3.0.4-9.el9.x86_64.rpm 139 kB/s | 30 kB 00:00 2026-03-10T13:34:05.683 INFO:teuthology.orchestra.run.vm05.stdout:(54/136): flexiblas-openblas-openmp-3.0.4-9.el9 634 kB/s | 15 kB 00:00 2026-03-10T13:34:05.740 INFO:teuthology.orchestra.run.vm05.stdout:(55/136): libnbd-1.20.3-4.el9.x86_64.rpm 2.8 MB/s | 164 kB 00:00 2026-03-10T13:34:05.742 INFO:teuthology.orchestra.run.vm05.stdout:(56/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 2.7 MB/s | 160 kB 00:00 2026-03-10T13:34:05.763 INFO:teuthology.orchestra.run.vm05.stdout:(57/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 1.9 MB/s | 45 kB 00:00 2026-03-10T13:34:05.808 INFO:teuthology.orchestra.run.vm05.stdout:(58/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 5.4 MB/s | 246 kB 00:00 2026-03-10T13:34:05.818 INFO:teuthology.orchestra.run.vm05.stdout:(59/136): librdkafka-1.6.1-102.el9.x86_64.rpm 8.5 MB/s | 662 kB 00:00 2026-03-10T13:34:05.841 INFO:teuthology.orchestra.run.vm05.stdout:(60/136): libxslt-1.1.34-12.el9.x86_64.rpm 6.9 MB/s | 233 kB 00:00 2026-03-10T13:34:05.854 INFO:teuthology.orchestra.run.vm05.stdout:(61/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 14 MB/s | 3.0 MB 00:00 2026-03-10T13:34:05.856 INFO:teuthology.orchestra.run.vm05.stdout:(62/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 7.5 MB/s | 292 kB 00:00 2026-03-10T13:34:05.866 INFO:teuthology.orchestra.run.vm05.stdout:(63/136): lua-5.4.4-4.el9.x86_64.rpm 7.5 MB/s | 188 kB 00:00 2026-03-10T13:34:05.879 INFO:teuthology.orchestra.run.vm05.stdout:(64/136): openblas-0.3.29-1.el9.x86_64.rpm 1.6 MB/s | 42 kB 00:00 2026-03-10T13:34:05.887 INFO:teuthology.orchestra.run.vm09.stdout:(51/136): flexiblas-3.0.4-9.el9.x86_64.rpm 81 kB/s | 30 kB 00:00 2026-03-10T13:34:05.920 INFO:teuthology.orchestra.run.vm05.stdout:(65/136): protobuf-3.14.0-17.el9.x86_64.rpm 19 MB/s | 1.0 MB 00:00 2026-03-10T13:34:06.003 INFO:teuthology.orchestra.run.vm05.stdout:(66/136): openblas-openmp-0.3.29-1.el9.x86_64.r 36 MB/s | 5.3 MB 00:00 2026-03-10T13:34:06.033 INFO:teuthology.orchestra.run.vm05.stdout:(67/136): python3-jinja2-2.11.3-8.el9.noarch.rp 8.3 MB/s | 249 kB 00:00 2026-03-10T13:34:06.033 INFO:teuthology.orchestra.run.vm09.stdout:(52/136): boost-program-options-1.75.0-13.el9.x 201 kB/s | 104 kB 00:00 2026-03-10T13:34:06.058 INFO:teuthology.orchestra.run.vm05.stdout:(68/136): python3-babel-2.9.1-2.el9.noarch.rpm 33 MB/s | 6.0 MB 00:00 2026-03-10T13:34:06.065 INFO:teuthology.orchestra.run.vm05.stdout:(69/136): python3-jmespath-1.0.1-1.el9.noarch.r 1.4 MB/s | 48 kB 00:00 2026-03-10T13:34:06.088 INFO:teuthology.orchestra.run.vm05.stdout:(70/136): python3-libstoragemgmt-1.10.1-1.el9.x 5.8 MB/s | 177 kB 00:00 2026-03-10T13:34:06.090 INFO:teuthology.orchestra.run.vm05.stdout:(71/136): python3-mako-1.1.4-6.el9.noarch.rpm 6.7 MB/s | 172 kB 00:00 2026-03-10T13:34:06.102 INFO:teuthology.orchestra.run.vm05.stdout:(72/136): python3-devel-3.9.25-3.el9.x86_64.rpm 1.3 MB/s | 244 kB 00:00 2026-03-10T13:34:06.114 INFO:teuthology.orchestra.run.vm05.stdout:(73/136): python3-markupsafe-1.1.1-12.el9.x86_6 1.3 MB/s | 35 kB 00:00 2026-03-10T13:34:06.161 INFO:teuthology.orchestra.run.vm05.stdout:(74/136): python3-packaging-20.9-5.el9.noarch.r 1.6 MB/s | 77 kB 00:00 2026-03-10T13:34:06.193 INFO:teuthology.orchestra.run.vm05.stdout:(75/136): python3-protobuf-3.14.0-17.el9.noarch 8.2 MB/s | 267 kB 00:00 2026-03-10T13:34:06.212 
INFO:teuthology.orchestra.run.vm09.stdout:(53/136): flexiblas-openblas-openmp-3.0.4-9.el9 83 kB/s | 15 kB 00:00
2026-03-10T13:34:06.229 INFO:teuthology.orchestra.run.vm05.stdout:(76/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 44 MB/s | 6.1 MB 00:00
2026-03-10T13:34:06.231 INFO:teuthology.orchestra.run.vm05.stdout:(77/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 4.1 MB/s | 157 kB 00:00
2026-03-10T13:34:06.260 INFO:teuthology.orchestra.run.vm05.stdout:(78/136): python3-pyasn1-modules-0.4.8-7.el9.no 8.9 MB/s | 277 kB 00:00
2026-03-10T13:34:06.264 INFO:teuthology.orchestra.run.vm05.stdout:(79/136): python3-requests-oauthlib-1.3.0-12.el 1.6 MB/s | 54 kB 00:00
2026-03-10T13:34:06.309 INFO:teuthology.orchestra.run.vm05.stdout:(80/136): python3-toml-0.10.2-6.el9.noarch.rpm 925 kB/s | 42 kB 00:00
2026-03-10T13:34:06.329 INFO:teuthology.orchestra.run.vm05.stdout:(81/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 1.9 MB/s | 442 kB 00:00
2026-03-10T13:34:06.357 INFO:teuthology.orchestra.run.vm05.stdout:(82/136): qatlib-service-25.08.0-2.el9.x86_64.r 1.3 MB/s | 37 kB 00:00
2026-03-10T13:34:06.446 INFO:teuthology.orchestra.run.vm05.stdout:(83/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 750 kB/s | 66 kB 00:00
2026-03-10T13:34:06.475 INFO:teuthology.orchestra.run.vm09.stdout:(54/136): ceph-test-19.2.3-678.ge911bdeb.el9.x8 13 MB/s | 50 MB 00:03
2026-03-10T13:34:06.477 INFO:teuthology.orchestra.run.vm09.stdout:(55/136): libnbd-1.20.3-4.el9.x86_64.rpm 620 kB/s | 164 kB 00:00
2026-03-10T13:34:06.488 INFO:teuthology.orchestra.run.vm05.stdout:(84/136): socat-1.7.4.1-8.el9.x86_64.rpm 7.1 MB/s | 303 kB 00:00
2026-03-10T13:34:06.489 INFO:teuthology.orchestra.run.vm05.stdout:(85/136): qatlib-25.08.0-2.el9.x86_64.rpm 1.3 MB/s | 240 kB 00:00
2026-03-10T13:34:06.515 INFO:teuthology.orchestra.run.vm05.stdout:(86/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 2.3 MB/s | 64 kB 00:00
2026-03-10T13:34:06.575 INFO:teuthology.orchestra.run.vm05.stdout:(87/136): lua-devel-5.4.4-4.el9.x86_64.rpm 259 kB/s | 22 kB 00:00
2026-03-10T13:34:06.654 INFO:teuthology.orchestra.run.vm09.stdout:(56/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 255 kB/s | 45 kB 00:00
2026-03-10T13:34:06.674 INFO:teuthology.orchestra.run.vm05.stdout:(88/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 5.5 MB/s | 551 kB 00:00
2026-03-10T13:34:06.679 INFO:teuthology.orchestra.run.vm09.stdout:(57/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 3.8 MB/s | 3.0 MB 00:00
2026-03-10T13:34:06.708 INFO:teuthology.orchestra.run.vm05.stdout:(89/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 8.7 MB/s | 308 kB 00:00
2026-03-10T13:34:06.715 INFO:teuthology.orchestra.run.vm05.stdout:(90/136): grpc-data-1.46.7-10.el9.noarch.rpm 2.8 MB/s | 19 kB 00:00
2026-03-10T13:34:06.802 INFO:teuthology.orchestra.run.vm09.stdout:(58/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 2.0 MB/s | 246 kB 00:00
2026-03-10T13:34:06.802 INFO:teuthology.orchestra.run.vm05.stdout:(91/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 36 MB/s | 19 MB 00:00
2026-03-10T13:34:06.806 INFO:teuthology.orchestra.run.vm05.stdout:(92/136): protobuf-compiler-3.14.0-17.el9.x86_6 2.9 MB/s | 862 kB 00:00
2026-03-10T13:34:06.811 INFO:teuthology.orchestra.run.vm05.stdout:(93/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 2.9 MB/s | 25 kB 00:00
2026-03-10T13:34:06.812 INFO:teuthology.orchestra.run.vm05.stdout:(94/136): liboath-2.6.12-1.el9.x86_64.rpm 8.5 MB/s | 49 kB 00:00
2026-03-10T13:34:06.817 INFO:teuthology.orchestra.run.vm05.stdout:(95/136): libunwind-1.6.2-1.el9.x86_64.rpm 12 MB/s | 67 kB 00:00
2026-03-10T13:34:06.818 INFO:teuthology.orchestra.run.vm05.stdout:(96/136): luarocks-3.9.2-5.el9.noarch.rpm 26 MB/s | 151 kB 00:00
2026-03-10T13:34:06.840 INFO:teuthology.orchestra.run.vm05.stdout:(97/136): python3-asyncssh-2.13.2-5.el9.noarch. 25 MB/s | 548 kB 00:00
2026-03-10T13:34:06.843 INFO:teuthology.orchestra.run.vm05.stdout:(98/136): python3-autocommand-2.2.2-8.el9.noarc 9.9 MB/s | 29 kB 00:00
2026-03-10T13:34:06.846 INFO:teuthology.orchestra.run.vm05.stdout:(99/136): python3-backports-tarfile-1.2.0-1.el9 20 MB/s | 60 kB 00:00
2026-03-10T13:34:06.850 INFO:teuthology.orchestra.run.vm05.stdout:(100/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 14 MB/s | 43 kB 00:00
2026-03-10T13:34:06.852 INFO:teuthology.orchestra.run.vm05.stdout:(101/136): python3-cachetools-4.2.4-1.el9.noarc 11 MB/s | 32 kB 00:00
2026-03-10T13:34:06.855 INFO:teuthology.orchestra.run.vm05.stdout:(102/136): python3-certifi-2023.05.07-4.el9.noa 5.1 MB/s | 14 kB 00:00
2026-03-10T13:34:06.860 INFO:teuthology.orchestra.run.vm05.stdout:(103/136): python3-cheroot-10.0.1-4.el9.noarch. 37 MB/s | 173 kB 00:00
2026-03-10T13:34:06.866 INFO:teuthology.orchestra.run.vm05.stdout:(104/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 17 MB/s | 838 kB 00:00
2026-03-10T13:34:06.868 INFO:teuthology.orchestra.run.vm05.stdout:(105/136): python3-cherrypy-18.6.1-2.el9.noarch 46 MB/s | 358 kB 00:00
2026-03-10T13:34:06.880 INFO:teuthology.orchestra.run.vm05.stdout:(106/136): python3-google-auth-2.45.0-1.el9.noa 18 MB/s | 254 kB 00:00
2026-03-10T13:34:06.891 INFO:teuthology.orchestra.run.vm05.stdout:(107/136): python3-grpcio-tools-1.46.7-10.el9.x 13 MB/s | 144 kB 00:00
2026-03-10T13:34:06.903 INFO:teuthology.orchestra.run.vm05.stdout:(108/136): python3-grpcio-1.46.7-10.el9.x86_64. 59 MB/s | 2.0 MB 00:00
2026-03-10T13:34:06.903 INFO:teuthology.orchestra.run.vm05.stdout:(109/136): python3-jaraco-8.2.1-3.el9.noarch.rp 879 kB/s | 11 kB 00:00
2026-03-10T13:34:06.908 INFO:teuthology.orchestra.run.vm09.stdout:(59/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 370 kB/s | 160 kB 00:00
2026-03-10T13:34:06.917 INFO:teuthology.orchestra.run.vm09.stdout:(60/136): librdkafka-1.6.1-102.el9.x86_64.rpm 2.5 MB/s | 662 kB 00:00
2026-03-10T13:34:06.917 INFO:teuthology.orchestra.run.vm05.stdout:(110/136): libarrow-9.0.0-15.el9.x86_64.rpm 22 MB/s | 4.4 MB 00:00
2026-03-10T13:34:06.918 INFO:teuthology.orchestra.run.vm09.stdout:(61/136): libxslt-1.1.34-12.el9.x86_64.rpm 2.0 MB/s | 233 kB 00:00
2026-03-10T13:34:06.918 INFO:teuthology.orchestra.run.vm05.stdout:(111/136): python3-jaraco-classes-3.2.1-5.el9.n 1.2 MB/s | 18 kB 00:00
2026-03-10T13:34:06.919 INFO:teuthology.orchestra.run.vm05.stdout:(112/136): python3-jaraco-collections-3.0.0-8.e 1.5 MB/s | 23 kB 00:00
2026-03-10T13:34:06.920 INFO:teuthology.orchestra.run.vm05.stdout:(113/136): python3-jaraco-context-6.0.1-3.el9.n 7.9 MB/s | 20 kB 00:00
2026-03-10T13:34:06.922 INFO:teuthology.orchestra.run.vm05.stdout:(114/136): python3-jaraco-functools-3.5.0-2.el9 6.8 MB/s | 19 kB 00:00
2026-03-10T13:34:06.922 INFO:teuthology.orchestra.run.vm05.stdout:(115/136): python3-jaraco-text-4.0.0-2.el9.noar 7.5 MB/s | 26 kB 00:00
2026-03-10T13:34:06.925 INFO:teuthology.orchestra.run.vm05.stdout:(116/136): python3-logutils-0.3.5-21.el9.noarch 16 MB/s | 46 kB 00:00
2026-03-10T13:34:06.926 INFO:teuthology.orchestra.run.vm05.stdout:(117/136): python3-more-itertools-8.12.0-2.el9. 22 MB/s | 79 kB 00:00
2026-03-10T13:34:06.928 INFO:teuthology.orchestra.run.vm05.stdout:(118/136): python3-natsort-7.1.1-5.el9.noarch.r 21 MB/s | 58 kB 00:00
2026-03-10T13:34:06.932 INFO:teuthology.orchestra.run.vm05.stdout:(119/136): python3-pecan-1.4.2-3.el9.noarch.rpm 49 MB/s | 272 kB 00:00
2026-03-10T13:34:06.932 INFO:teuthology.orchestra.run.vm05.stdout:(120/136): python3-portend-3.1.0-2.el9.noarch.r 3.8 MB/s | 16 kB 00:00
2026-03-10T13:34:06.935 INFO:teuthology.orchestra.run.vm05.stdout:(121/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 28 MB/s | 90 kB 00:00
2026-03-10T13:34:06.936 INFO:teuthology.orchestra.run.vm05.stdout:(122/136): python3-repoze-lru-0.7-16.el9.noarch 7.7 MB/s | 31 kB 00:00
2026-03-10T13:34:06.940 INFO:teuthology.orchestra.run.vm05.stdout:(123/136): python3-routes-2.5.1-5.el9.noarch.rp 44 MB/s | 188 kB 00:00
2026-03-10T13:34:06.941 INFO:teuthology.orchestra.run.vm05.stdout:(124/136): python3-rsa-4.9-2.el9.noarch.rpm 14 MB/s | 59 kB 00:00
2026-03-10T13:34:06.942 INFO:teuthology.orchestra.run.vm05.stdout:(125/136): python3-tempora-5.0.0-2.el9.noarch.r 14 MB/s | 36 kB 00:00
2026-03-10T13:34:06.944 INFO:teuthology.orchestra.run.vm05.stdout:(126/136): python3-typing-extensions-4.15.0-1.e 27 MB/s | 86 kB 00:00
2026-03-10T13:34:06.947 INFO:teuthology.orchestra.run.vm05.stdout:(127/136): python3-kubernetes-26.1.0-3.el9.noar 38 MB/s | 1.0 MB 00:00
2026-03-10T13:34:06.948 INFO:teuthology.orchestra.run.vm05.stdout:(128/136): python3-websocket-client-1.2.3-2.el9 24 MB/s | 90 kB 00:00
2026-03-10T13:34:06.949 INFO:teuthology.orchestra.run.vm05.stdout:(129/136): python3-webob-1.8.8-2.el9.noarch.rpm 35 MB/s | 230 kB 00:00
2026-03-10T13:34:06.950 INFO:teuthology.orchestra.run.vm05.stdout:(130/136): python3-xmltodict-0.12.0-15.el9.noar 7.9 MB/s | 22 kB 00:00
2026-03-10T13:34:06.951 INFO:teuthology.orchestra.run.vm05.stdout:(131/136): python3-zc-lockfile-2.0-10.el9.noarc 7.9 MB/s | 20 kB 00:00
2026-03-10T13:34:06.960 INFO:teuthology.orchestra.run.vm05.stdout:(132/136): re2-20211101-20.el9.x86_64.rpm 20 MB/s | 191 kB 00:00
2026-03-10T13:34:06.961 INFO:teuthology.orchestra.run.vm05.stdout:(133/136): python3-werkzeug-2.0.3-3.el9.1.noarc 29 MB/s | 427 kB 00:00
2026-03-10T13:34:06.974 INFO:teuthology.orchestra.run.vm05.stdout:(134/136): thrift-0.15.0-4.el9.x86_64.rpm 73 MB/s | 1.6 MB 00:00
2026-03-10T13:34:07.042 INFO:teuthology.orchestra.run.vm09.stdout:(62/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 2.1 MB/s | 292 kB 00:00
2026-03-10T13:34:07.053 INFO:teuthology.orchestra.run.vm09.stdout:(63/136): openblas-0.3.29-1.el9.x86_64.rpm 314 kB/s | 42 kB 00:00
2026-03-10T13:34:07.057 INFO:teuthology.orchestra.run.vm09.stdout:(64/136): lua-5.4.4-4.el9.x86_64.rpm 1.3 MB/s | 188 kB 00:00
2026-03-10T13:34:07.423 INFO:teuthology.orchestra.run.vm09.stdout:(65/136): protobuf-3.14.0-17.el9.x86_64.rpm 2.7 MB/s | 1.0 MB 00:00
2026-03-10T13:34:07.478 INFO:teuthology.orchestra.run.vm09.stdout:(66/136): openblas-openmp-0.3.29-1.el9.x86_64.r 12 MB/s | 5.3 MB 00:00
2026-03-10T13:34:07.562 INFO:teuthology.orchestra.run.vm09.stdout:(67/136): python3-babel-2.9.1-2.el9.noarch.rpm 12 MB/s | 6.0 MB 00:00
2026-03-10T13:34:07.576 INFO:teuthology.orchestra.run.vm09.stdout:(68/136): python3-jinja2-2.11.3-8.el9.noarch.rp 2.5 MB/s | 249 kB 00:00
2026-03-10T13:34:07.577 INFO:teuthology.orchestra.run.vm09.stdout:(69/136): python3-devel-3.9.25-3.el9.x86_64.rpm 1.5 MB/s | 244 kB 00:00
2026-03-10T13:34:07.709 INFO:teuthology.orchestra.run.vm09.stdout:(70/136): python3-jmespath-1.0.1-1.el9.noarch.r 326 kB/s | 48 kB 00:00
2026-03-10T13:34:07.725 INFO:teuthology.orchestra.run.vm09.stdout:(71/136): python3-libstoragemgmt-1.10.1-1.el9.x 1.2 MB/s | 177 kB 00:00
2026-03-10T13:34:07.738 INFO:teuthology.orchestra.run.vm09.stdout:(72/136): python3-mako-1.1.4-6.el9.noarch.rpm 1.0 MB/s | 172 kB 00:00
2026-03-10T13:34:07.758 INFO:teuthology.orchestra.run.vm05.stdout:(135/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 4.0 MB/s | 3.2 MB 00:00
2026-03-10T13:34:07.795 INFO:teuthology.orchestra.run.vm09.stdout:(73/136): python3-markupsafe-1.1.1-12.el9.x86_6 403 kB/s | 35 kB 00:00
2026-03-10T13:34:07.901 INFO:teuthology.orchestra.run.vm09.stdout:(74/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 2.6 MB/s | 442 kB 00:00
2026-03-10T13:34:07.908 INFO:teuthology.orchestra.run.vm09.stdout:(75/136): python3-packaging-20.9-5.el9.noarch.r 681 kB/s | 77 kB 00:00
2026-03-10T13:34:07.924 INFO:teuthology.orchestra.run.vm05.stdout:(136/136): librados2-19.2.3-678.ge911bdeb.el9.x 3.6 MB/s | 3.4 MB 00:00
2026-03-10T13:34:07.928 INFO:teuthology.orchestra.run.vm05.stdout:--------------------------------------------------------------------------------
2026-03-10T13:34:07.928 INFO:teuthology.orchestra.run.vm05.stdout:Total 19 MB/s | 210 MB 00:10
2026-03-10T13:34:08.078 INFO:teuthology.orchestra.run.vm09.stdout:(76/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 17 MB/s | 6.1 MB 00:00
2026-03-10T13:34:08.080 INFO:teuthology.orchestra.run.vm09.stdout:(77/136): python3-protobuf-3.14.0-17.el9.noarch 1.5 MB/s | 267 kB 00:00
2026-03-10T13:34:08.097 INFO:teuthology.orchestra.run.vm09.stdout:(78/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 834 kB/s | 157 kB 00:00
2026-03-10T13:34:08.238 INFO:teuthology.orchestra.run.vm09.stdout:(79/136): python3-requests-oauthlib-1.3.0-12.el 340 kB/s | 54 kB 00:00
2026-03-10T13:34:08.243 INFO:teuthology.orchestra.run.vm09.stdout:(80/136): python3-pyasn1-modules-0.4.8-7.el9.no 1.6 MB/s | 277 kB 00:00
2026-03-10T13:34:08.352 INFO:teuthology.orchestra.run.vm09.stdout:(81/136): python3-toml-0.10.2-6.el9.noarch.rpm 367 kB/s | 42 kB 00:00
2026-03-10T13:34:08.400 INFO:teuthology.orchestra.run.vm09.stdout:(82/136): qatlib-25.08.0-2.el9.x86_64.rpm 1.5 MB/s | 240 kB 00:00
2026-03-10T13:34:08.445 INFO:teuthology.orchestra.run.vm09.stdout:(83/136): qatlib-service-25.08.0-2.el9.x86_64.r 397 kB/s | 37 kB 00:00
2026-03-10T13:34:08.547 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-10T13:34:08.554 INFO:teuthology.orchestra.run.vm09.stdout:(84/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 429 kB/s | 66 kB 00:00
2026-03-10T13:34:08.600 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-10T13:34:08.600 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-10T13:34:08.601 INFO:teuthology.orchestra.run.vm09.stdout:(85/136): socat-1.7.4.1-8.el9.x86_64.rpm 1.9 MB/s | 303 kB 00:00
2026-03-10T13:34:08.656 INFO:teuthology.orchestra.run.vm09.stdout:(86/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 625 kB/s | 64 kB 00:00
2026-03-10T13:34:08.704 INFO:teuthology.orchestra.run.vm09.stdout:(87/136): lua-devel-5.4.4-4.el9.x86_64.rpm 218 kB/s | 22 kB 00:00
2026-03-10T13:34:08.717 INFO:teuthology.orchestra.run.vm09.stdout:(88/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 41 MB/s | 551 kB 00:00
2026-03-10T13:34:08.724 INFO:teuthology.orchestra.run.vm09.stdout:(89/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 43 MB/s | 308 kB 00:00
2026-03-10T13:34:08.726 INFO:teuthology.orchestra.run.vm09.stdout:(90/136): grpc-data-1.46.7-10.el9.noarch.rpm 9.0 MB/s | 19 kB 00:00
2026-03-10T13:34:08.842 INFO:teuthology.orchestra.run.vm09.stdout:(91/136): libarrow-9.0.0-15.el9.x86_64.rpm 38 MB/s | 4.4 MB 00:00
2026-03-10T13:34:08.849 INFO:teuthology.orchestra.run.vm09.stdout:(92/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 4.1 MB/s | 25 kB 00:00
2026-03-10T13:34:08.855 INFO:teuthology.orchestra.run.vm09.stdout:(93/136): liboath-2.6.12-1.el9.x86_64.rpm 8.1 MB/s | 49 kB 00:00
2026-03-10T13:34:08.862 INFO:teuthology.orchestra.run.vm09.stdout:(94/136): libunwind-1.6.2-1.el9.x86_64.rpm 8.9 MB/s | 67 kB 00:00
2026-03-10T13:34:08.867 INFO:teuthology.orchestra.run.vm09.stdout:(95/136): luarocks-3.9.2-5.el9.noarch.rpm 37 MB/s | 151 kB 00:00
2026-03-10T13:34:08.910 INFO:teuthology.orchestra.run.vm09.stdout:(96/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 19 MB/s | 838 kB 00:00
2026-03-10T13:34:08.932 INFO:teuthology.orchestra.run.vm09.stdout:(97/136): python3-asyncssh-2.13.2-5.el9.noarch. 25 MB/s | 548 kB 00:00
2026-03-10T13:34:08.934 INFO:teuthology.orchestra.run.vm09.stdout:(98/136): python3-autocommand-2.2.2-8.el9.noarc 11 MB/s | 29 kB 00:00
2026-03-10T13:34:08.938 INFO:teuthology.orchestra.run.vm09.stdout:(99/136): python3-backports-tarfile-1.2.0-1.el9 17 MB/s | 60 kB 00:00
2026-03-10T13:34:08.941 INFO:teuthology.orchestra.run.vm09.stdout:(100/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 14 MB/s | 43 kB 00:00
2026-03-10T13:34:08.946 INFO:teuthology.orchestra.run.vm09.stdout:(101/136): python3-cachetools-4.2.4-1.el9.noarc 6.4 MB/s | 32 kB 00:00
2026-03-10T13:34:08.952 INFO:teuthology.orchestra.run.vm09.stdout:(102/136): python3-certifi-2023.05.07-4.el9.noa 2.3 MB/s | 14 kB 00:00
2026-03-10T13:34:08.959 INFO:teuthology.orchestra.run.vm09.stdout:(103/136): python3-cheroot-10.0.1-4.el9.noarch. 27 MB/s | 173 kB 00:00
2026-03-10T13:34:08.969 INFO:teuthology.orchestra.run.vm09.stdout:(104/136): python3-cherrypy-18.6.1-2.el9.noarch 37 MB/s | 358 kB 00:00
2026-03-10T13:34:08.981 INFO:teuthology.orchestra.run.vm09.stdout:(105/136): python3-google-auth-2.45.0-1.el9.noa 20 MB/s | 254 kB 00:00
2026-03-10T13:34:09.039 INFO:teuthology.orchestra.run.vm09.stdout:(106/136): python3-grpcio-1.46.7-10.el9.x86_64. 35 MB/s | 2.0 MB 00:00
2026-03-10T13:34:09.044 INFO:teuthology.orchestra.run.vm09.stdout:(107/136): python3-grpcio-tools-1.46.7-10.el9.x 31 MB/s | 144 kB 00:00
2026-03-10T13:34:09.046 INFO:teuthology.orchestra.run.vm09.stdout:(108/136): python3-jaraco-8.2.1-3.el9.noarch.rp 5.6 MB/s | 11 kB 00:00
2026-03-10T13:34:09.048 INFO:teuthology.orchestra.run.vm09.stdout:(109/136): python3-jaraco-classes-3.2.1-5.el9.n 8.5 MB/s | 18 kB 00:00
2026-03-10T13:34:09.050 INFO:teuthology.orchestra.run.vm09.stdout:(110/136): python3-jaraco-collections-3.0.0-8.e 10 MB/s | 23 kB 00:00
2026-03-10T13:34:09.053 INFO:teuthology.orchestra.run.vm09.stdout:(111/136): python3-jaraco-context-6.0.1-3.el9.n 7.3 MB/s | 20 kB 00:00
2026-03-10T13:34:09.055 INFO:teuthology.orchestra.run.vm09.stdout:(112/136): python3-jaraco-functools-3.5.0-2.el9 9.3 MB/s | 19 kB 00:00
2026-03-10T13:34:09.058 INFO:teuthology.orchestra.run.vm09.stdout:(113/136): python3-jaraco-text-4.0.0-2.el9.noar 13 MB/s | 26 kB 00:00
2026-03-10T13:34:09.076 INFO:teuthology.orchestra.run.vm09.stdout:(114/136): protobuf-compiler-3.14.0-17.el9.x86_ 2.0 MB/s | 862 kB 00:00
2026-03-10T13:34:09.080 INFO:teuthology.orchestra.run.vm09.stdout:(115/136): python3-kubernetes-26.1.0-3.el9.noar 46 MB/s | 1.0 MB 00:00
2026-03-10T13:34:09.082 INFO:teuthology.orchestra.run.vm09.stdout:(116/136): python3-logutils-0.3.5-21.el9.noarch 8.0 MB/s | 46 kB 00:00
2026-03-10T13:34:09.083 INFO:teuthology.orchestra.run.vm09.stdout:(117/136): python3-more-itertools-8.12.0-2.el9. 29 MB/s | 79 kB 00:00
2026-03-10T13:34:09.084 INFO:teuthology.orchestra.run.vm09.stdout:(118/136): python3-natsort-7.1.1-5.el9.noarch.r 21 MB/s | 58 kB 00:00
2026-03-10T13:34:09.087 INFO:teuthology.orchestra.run.vm09.stdout:(119/136): python3-portend-3.1.0-2.el9.noarch.r 7.9 MB/s | 16 kB 00:00
2026-03-10T13:34:09.090 INFO:teuthology.orchestra.run.vm09.stdout:(120/136): python3-pecan-1.4.2-3.el9.noarch.rpm 42 MB/s | 272 kB 00:00
2026-03-10T13:34:09.092 INFO:teuthology.orchestra.run.vm09.stdout:(121/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 19 MB/s | 90 kB 00:00
2026-03-10T13:34:09.092 INFO:teuthology.orchestra.run.vm09.stdout:(122/136): python3-repoze-lru-0.7-16.el9.noarch 14 MB/s | 31 kB 00:00
2026-03-10T13:34:09.096 INFO:teuthology.orchestra.run.vm09.stdout:(123/136): python3-rsa-4.9-2.el9.noarch.rpm 18 MB/s | 59 kB 00:00
2026-03-10T13:34:09.097 INFO:teuthology.orchestra.run.vm09.stdout:(124/136): python3-routes-2.5.1-5.el9.noarch.rp 35 MB/s | 188 kB 00:00
2026-03-10T13:34:09.100 INFO:teuthology.orchestra.run.vm09.stdout:(125/136): python3-typing-extensions-4.15.0-1.e 30 MB/s | 86 kB 00:00
2026-03-10T13:34:09.101 INFO:teuthology.orchestra.run.vm09.stdout:(126/136): python3-tempora-5.0.0-2.el9.noarch.r 7.1 MB/s | 36 kB 00:00
2026-03-10T13:34:09.106 INFO:teuthology.orchestra.run.vm09.stdout:(127/136): python3-webob-1.8.8-2.el9.noarch.rpm 45 MB/s | 230 kB 00:00
2026-03-10T13:34:09.108 INFO:teuthology.orchestra.run.vm09.stdout:(128/136): python3-websocket-client-1.2.3-2.el9 14 MB/s | 90 kB 00:00
2026-03-10T13:34:09.113 INFO:teuthology.orchestra.run.vm09.stdout:(129/136): python3-xmltodict-0.12.0-15.el9.noar 4.4 MB/s | 22 kB 00:00
2026-03-10T13:34:09.119 INFO:teuthology.orchestra.run.vm09.stdout:(130/136): python3-werkzeug-2.0.3-3.el9.1.noarc 33 MB/s | 427 kB 00:00
2026-03-10T13:34:09.119 INFO:teuthology.orchestra.run.vm09.stdout:(131/136): python3-zc-lockfile-2.0-10.el9.noarc 3.2 MB/s | 20 kB 00:00
2026-03-10T13:34:09.128 INFO:teuthology.orchestra.run.vm09.stdout:(132/136): re2-20211101-20.el9.x86_64.rpm 22 MB/s | 191 kB 00:00
2026-03-10T13:34:09.195 INFO:teuthology.orchestra.run.vm09.stdout:(133/136): thrift-0.15.0-4.el9.x86_64.rpm 21 MB/s | 1.6 MB 00:00
2026-03-10T13:34:09.400 INFO:teuthology.orchestra.run.vm09.stdout:(134/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 15 MB/s | 19 MB 00:01
2026-03-10T13:34:09.464 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-10T13:34:09.464 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-10T13:34:10.143 INFO:teuthology.orchestra.run.vm09.stdout:(135/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 3.3 MB/s | 3.2 MB 00:00
2026-03-10T13:34:10.191 INFO:teuthology.orchestra.run.vm09.stdout:(136/136): librados2-19.2.3-678.ge911bdeb.el9.x 3.2 MB/s | 3.4 MB 00:01
2026-03-10T13:34:10.194 INFO:teuthology.orchestra.run.vm09.stdout:--------------------------------------------------------------------------------
2026-03-10T13:34:10.194 INFO:teuthology.orchestra.run.vm09.stdout:Total 17 MB/s | 210 MB 00:12
2026-03-10T13:34:10.378 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-10T13:34:10.392 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138
2026-03-10T13:34:10.404 INFO:teuthology.orchestra.run.vm05.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138
2026-03-10T13:34:10.579 INFO:teuthology.orchestra.run.vm05.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138
2026-03-10T13:34:10.582 INFO:teuthology.orchestra.run.vm05.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T13:34:10.644 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T13:34:10.646 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T13:34:10.676 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T13:34:10.684 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T13:34:10.688 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138
2026-03-10T13:34:10.690 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138
2026-03-10T13:34:10.695 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138
2026-03-10T13:34:10.706 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138
2026-03-10T13:34:10.707 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T13:34:10.741 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T13:34:10.744 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T13:34:10.760 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T13:34:10.800 INFO:teuthology.orchestra.run.vm05.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138
2026-03-10T13:34:10.805 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T13:34:10.840 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138
2026-03-10T13:34:10.845 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138
2026-03-10T13:34:10.854 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T13:34:10.855 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T13:34:10.870 INFO:teuthology.orchestra.run.vm05.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138
2026-03-10T13:34:10.883 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138
2026-03-10T13:34:10.891 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138
2026-03-10T13:34:10.902 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138
2026-03-10T13:34:10.909 INFO:teuthology.orchestra.run.vm05.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138
2026-03-10T13:34:10.914 INFO:teuthology.orchestra.run.vm05.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138
2026-03-10T13:34:10.919 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138
2026-03-10T13:34:10.948 INFO:teuthology.orchestra.run.vm05.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138
2026-03-10T13:34:10.964 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138
2026-03-10T13:34:10.969 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138
2026-03-10T13:34:10.976 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138
2026-03-10T13:34:10.978 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138
2026-03-10T13:34:11.013 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138
2026-03-10T13:34:11.020 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138
2026-03-10T13:34:11.030 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138
2026-03-10T13:34:11.052 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138
2026-03-10T13:34:11.063 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138
2026-03-10T13:34:11.094 INFO:teuthology.orchestra.run.vm05.stdout: Installing : zip-3.0-35.el9.x86_64 33/138
2026-03-10T13:34:11.100 INFO:teuthology.orchestra.run.vm05.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138
2026-03-10T13:34:11.108 INFO:teuthology.orchestra.run.vm05.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138
2026-03-10T13:34:11.139 INFO:teuthology.orchestra.run.vm05.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138
2026-03-10T13:34:11.200 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138
2026-03-10T13:34:11.217 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138
2026-03-10T13:34:11.225 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138
2026-03-10T13:34:11.235 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138
2026-03-10T13:34:11.241 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138
2026-03-10T13:34:11.247 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138
2026-03-10T13:34:11.265 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138
2026-03-10T13:34:11.291 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138
2026-03-10T13:34:11.297 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138
2026-03-10T13:34:11.304 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138
2026-03-10T13:34:11.321 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138
2026-03-10T13:34:11.334 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138
2026-03-10T13:34:11.346 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138
2026-03-10T13:34:11.407 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138
2026-03-10T13:34:11.416 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138
2026-03-10T13:34:11.425 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 52/138
2026-03-10T13:34:11.476 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138
2026-03-10T13:34:11.684 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T13:34:11.684 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T13:34:11.855 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138
2026-03-10T13:34:11.872 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138
2026-03-10T13:34:11.877 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138
2026-03-10T13:34:11.884 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138
2026-03-10T13:34:11.889 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138
2026-03-10T13:34:11.897 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138
2026-03-10T13:34:11.900 INFO:teuthology.orchestra.run.vm05.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138
2026-03-10T13:34:11.902 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138
2026-03-10T13:34:11.933 INFO:teuthology.orchestra.run.vm05.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138
2026-03-10T13:34:11.985 INFO:teuthology.orchestra.run.vm05.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138
2026-03-10T13:34:11.998 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138
2026-03-10T13:34:12.006 INFO:teuthology.orchestra.run.vm05.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138
2026-03-10T13:34:12.011 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138
2026-03-10T13:34:12.018 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138
2026-03-10T13:34:12.023 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138
2026-03-10T13:34:12.033 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138
2026-03-10T13:34:12.039 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138
2026-03-10T13:34:12.074 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138
2026-03-10T13:34:12.087 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138
2026-03-10T13:34:12.129 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138
2026-03-10T13:34:12.400 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138
2026-03-10T13:34:12.432 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138
2026-03-10T13:34:12.440 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138
2026-03-10T13:34:12.503 INFO:teuthology.orchestra.run.vm05.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138
2026-03-10T13:34:12.506 INFO:teuthology.orchestra.run.vm05.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138
2026-03-10T13:34:12.532 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138
2026-03-10T13:34:12.616 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T13:34:12.631 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138
2026-03-10T13:34:12.644 INFO:teuthology.orchestra.run.vm09.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138
2026-03-10T13:34:12.813 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138
2026-03-10T13:34:12.815 INFO:teuthology.orchestra.run.vm09.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T13:34:12.880 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T13:34:12.882 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T13:34:12.912 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T13:34:12.922 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T13:34:12.926 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138
2026-03-10T13:34:12.929 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138
2026-03-10T13:34:12.935 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138
2026-03-10T13:34:12.938 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138
2026-03-10T13:34:12.945 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138
2026-03-10T13:34:12.949 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T13:34:12.984 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T13:34:12.986 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T13:34:13.000 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T13:34:13.032 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138
2026-03-10T13:34:13.034 INFO:teuthology.orchestra.run.vm09.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138
2026-03-10T13:34:13.077 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138
2026-03-10T13:34:13.083 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138
2026-03-10T13:34:13.108 INFO:teuthology.orchestra.run.vm09.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138
2026-03-10T13:34:13.122 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138
2026-03-10T13:34:13.130 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138
2026-03-10T13:34:13.140 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138
2026-03-10T13:34:13.147 INFO:teuthology.orchestra.run.vm09.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138
2026-03-10T13:34:13.151 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138
2026-03-10T13:34:13.157 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138
2026-03-10T13:34:13.188 INFO:teuthology.orchestra.run.vm09.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138
2026-03-10T13:34:13.207 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138
2026-03-10T13:34:13.212 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138
2026-03-10T13:34:13.219 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138
2026-03-10T13:34:13.221 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138
2026-03-10T13:34:13.253 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138
2026-03-10T13:34:13.259 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138
2026-03-10T13:34:13.270 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138
2026-03-10T13:34:13.284 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138
2026-03-10T13:34:13.292 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138
2026-03-10T13:34:13.322 INFO:teuthology.orchestra.run.vm09.stdout: Installing : zip-3.0-35.el9.x86_64 33/138
2026-03-10T13:34:13.327 INFO:teuthology.orchestra.run.vm09.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138
2026-03-10T13:34:13.335 INFO:teuthology.orchestra.run.vm09.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138
2026-03-10T13:34:13.366 INFO:teuthology.orchestra.run.vm09.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138
2026-03-10T13:34:13.426 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138
2026-03-10T13:34:13.444 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138
2026-03-10T13:34:13.452 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138
2026-03-10T13:34:13.462 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138
2026-03-10T13:34:13.469 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138
2026-03-10T13:34:13.474 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138
2026-03-10T13:34:13.491 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138
2026-03-10T13:34:13.517 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138
2026-03-10T13:34:13.524 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138
2026-03-10T13:34:13.532 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138
2026-03-10T13:34:13.545 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138
2026-03-10T13:34:13.558 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138
2026-03-10T13:34:13.570 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138
2026-03-10T13:34:13.633 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138
2026-03-10T13:34:13.641 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138
2026-03-10T13:34:13.650 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 52/138
2026-03-10T13:34:13.697 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138
2026-03-10T13:34:13.841 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138
2026-03-10T13:34:13.872 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138
2026-03-10T13:34:13.879 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138
2026-03-10T13:34:13.885 INFO:teuthology.orchestra.run.vm05.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138
2026-03-10T13:34:14.041 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138
2026-03-10T13:34:14.045 INFO:teuthology.orchestra.run.vm05.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-10T13:34:14.064 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138
2026-03-10T13:34:14.078 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-10T13:34:14.080 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138
2026-03-10T13:34:14.082 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138
2026-03-10T13:34:14.086 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138
2026-03-10T13:34:14.091 INFO:teuthology.orchestra.run.vm05.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138
2026-03-10T13:34:14.094 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138
2026-03-10T13:34:14.099 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138
2026-03-10T13:34:14.106 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138
2026-03-10T13:34:14.110 INFO:teuthology.orchestra.run.vm09.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138
2026-03-10T13:34:14.111 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138
2026-03-10T13:34:14.142 INFO:teuthology.orchestra.run.vm09.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138
2026-03-10T13:34:14.191 INFO:teuthology.orchestra.run.vm09.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138
2026-03-10T13:34:14.208 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138
2026-03-10T13:34:14.215 INFO:teuthology.orchestra.run.vm09.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138
2026-03-10T13:34:14.220 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138
2026-03-10T13:34:14.228 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138
2026-03-10T13:34:14.233 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138
2026-03-10T13:34:14.243 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138
2026-03-10T13:34:14.247 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138
2026-03-10T13:34:14.282 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138
2026-03-10T13:34:14.295 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138
2026-03-10T13:34:14.339 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138
2026-03-10T13:34:14.344 INFO:teuthology.orchestra.run.vm05.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138
2026-03-10T13:34:14.347 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-10T13:34:14.367 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-10T13:34:14.371 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138
2026-03-10T13:34:14.618 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138
2026-03-10T13:34:14.648 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138
2026-03-10T13:34:14.655 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138
2026-03-10T13:34:14.715 INFO:teuthology.orchestra.run.vm09.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138
2026-03-10T13:34:14.718 INFO:teuthology.orchestra.run.vm09.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138
2026-03-10T13:34:14.742 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138
2026-03-10T13:34:15.110 INFO:teuthology.orchestra.run.vm09.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138
2026-03-10T13:34:15.197 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138
2026-03-10T13:34:15.539 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T13:34:15.545 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T13:34:15.567 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T13:34:15.585 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138
2026-03-10T13:34:15.606 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138
2026-03-10T13:34:15.699 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138
2026-03-10T13:34:15.716 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138
2026-03-10T13:34:15.746 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138
2026-03-10T13:34:15.785 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138
2026-03-10T13:34:15.849 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138
2026-03-10T13:34:15.962 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138
2026-03-10T13:34:16.039 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138
2026-03-10T13:34:16.044 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138
2026-03-10T13:34:16.052 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138
2026-03-10T13:34:16.052 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-10T13:34:16.058 INFO:teuthology.orchestra.run.vm09.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138
2026-03-10T13:34:16.059 INFO:teuthology.orchestra.run.vm05.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138
2026-03-10T13:34:16.065 INFO:teuthology.orchestra.run.vm05.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138
2026-03-10T13:34:16.068 INFO:teuthology.orchestra.run.vm05.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-10T13:34:16.089 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-10T13:34:16.216 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138
2026-03-10T13:34:16.220 INFO:teuthology.orchestra.run.vm09.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-10T13:34:16.252 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138
2026-03-10T13:34:16.256 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138
2026-03-10T13:34:16.263 INFO:teuthology.orchestra.run.vm09.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138
2026-03-10T13:34:16.414 INFO:teuthology.orchestra.run.vm05.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138
2026-03-10T13:34:16.421 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-10T13:34:16.465 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-10T13:34:16.465 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-10T13:34:16.465 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-10T13:34:16.465 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:34:16.471 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T13:34:16.533 INFO:teuthology.orchestra.run.vm09.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138
2026-03-10T13:34:16.536 INFO:teuthology.orchestra.run.vm09.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-10T13:34:16.557 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138
2026-03-10T13:34:16.559 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138
2026-03-10T13:34:17.685 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T13:34:17.691 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T13:34:17.715 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138
2026-03-10T13:34:17.733 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138
2026-03-10T13:34:17.754 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138
2026-03-10T13:34:17.847 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138
2026-03-10T13:34:17.862 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138
2026-03-10T13:34:17.891 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138
2026-03-10T13:34:17.934 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138
2026-03-10T13:34:17.998 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138
2026-03-10T13:34:18.009 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138
2026-03-10T13:34:18.015 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-10T13:34:18.023 INFO:teuthology.orchestra.run.vm09.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138
2026-03-10T13:34:18.027 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138
2026-03-10T13:34:18.029 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-10T13:34:18.048 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138
2026-03-10T13:34:18.361 INFO:teuthology.orchestra.run.vm09.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138
2026-03-10T13:34:18.420 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-10T13:34:18.454 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138
2026-03-10T13:34:18.454 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-10T13:34:18.455 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-10T13:34:18.455 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T13:34:18.459 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T13:34:23.160 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T13:34:23.160 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /sys
2026-03-10T13:34:23.160 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /proc
2026-03-10T13:34:23.160 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /mnt
2026-03-10T13:34:23.160 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /var/tmp
2026-03-10T13:34:23.160 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /home
2026-03-10T13:34:23.160 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /root
2026-03-10T13:34:23.160 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /tmp
2026-03-10T13:34:23.160 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:34:23.283 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T13:34:23.309 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T13:34:23.310 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:34:23.310 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T13:34:23.310 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T13:34:23.310 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T13:34:23.310 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:34:23.555 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T13:34:23.580 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T13:34:23.580 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:34:23.580 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T13:34:23.580 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T13:34:23.580 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T13:34:23.580 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:34:23.590 INFO:teuthology.orchestra.run.vm05.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138
2026-03-10T13:34:23.593 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138
2026-03-10T13:34:23.612 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T13:34:23.612 INFO:teuthology.orchestra.run.vm05.stdout:Creating group 'qat' with GID 994.
2026-03-10T13:34:23.613 INFO:teuthology.orchestra.run.vm05.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-10T13:34:23.613 INFO:teuthology.orchestra.run.vm05.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-10T13:34:23.613 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:34:23.623 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T13:34:23.652 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T13:34:23.652 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-10T13:34:23.652 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:34:23.836 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138
2026-03-10T13:34:24.024 INFO:teuthology.orchestra.run.vm05.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138
2026-03-10T13:34:24.184 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T13:34:24.197 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T13:34:24.197 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:34:24.197 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T13:34:24.197 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:34:24.940 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T13:34:24.965 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T13:34:24.965 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:34:24.965 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T13:34:24.965 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T13:34:24.965 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T13:34:24.965 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:34:25.012 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138
2026-03-10T13:34:25.012 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /sys
2026-03-10T13:34:25.012 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /proc
2026-03-10T13:34:25.012 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /mnt
2026-03-10T13:34:25.012 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /var/tmp
2026-03-10T13:34:25.012 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /home
2026-03-10T13:34:25.012 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /root
2026-03-10T13:34:25.012 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /tmp
2026-03-10T13:34:25.012 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T13:34:25.030 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T13:34:25.033 INFO:teuthology.orchestra.run.vm05.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T13:34:25.040 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138
2026-03-10T13:34:25.062 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138
2026-03-10T13:34:25.065 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T13:34:25.135 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T13:34:25.160 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138
2026-03-10T13:34:25.160 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:34:25.160 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T13:34:25.160 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T13:34:25.160 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-10T13:34:25.160 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T13:34:25.389 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T13:34:25.408 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138
2026-03-10T13:34:25.408 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:34:25.408 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T13:34:25.408 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T13:34:25.408 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-10T13:34:25.408 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T13:34:25.416 INFO:teuthology.orchestra.run.vm09.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138
2026-03-10T13:34:25.419 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138
2026-03-10T13:34:25.436 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T13:34:25.436 INFO:teuthology.orchestra.run.vm09.stdout:Creating group 'qat' with GID 994.
2026-03-10T13:34:25.436 INFO:teuthology.orchestra.run.vm09.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-10T13:34:25.436 INFO:teuthology.orchestra.run.vm09.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-10T13:34:25.436 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T13:34:25.446 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T13:34:25.473 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T13:34:25.473 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-10T13:34:25.473 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T13:34:25.513 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138
2026-03-10T13:34:25.584 INFO:teuthology.orchestra.run.vm09.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138
2026-03-10T13:34:25.589 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T13:34:25.598 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T13:34:25.602 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T13:34:25.602 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:34:25.602 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T13:34:25.602 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T13:34:25.605 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T13:34:26.098 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T13:34:26.100 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T13:34:26.161 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T13:34:26.216 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138
2026-03-10T13:34:26.219 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T13:34:26.242 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T13:34:26.242 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:34:26.242 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T13:34:26.242 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T13:34:26.243 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T13:34:26.243 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:34:26.257 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T13:34:26.270 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T13:34:26.355 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T13:34:26.377 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T13:34:26.378 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:34:26.378 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T13:34:26.378 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T13:34:26.378 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T13:34:26.378 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T13:34:26.433 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T13:34:26.437 INFO:teuthology.orchestra.run.vm09.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T13:34:26.443 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138
2026-03-10T13:34:26.464 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138
2026-03-10T13:34:26.468 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T13:34:26.764 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138
2026-03-10T13:34:26.768 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T13:34:26.791 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T13:34:26.792 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:34:26.792 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T13:34:26.792 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T13:34:26.792 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T13:34:26.792 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:34:26.804 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T13:34:26.826 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T13:34:26.826 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:34:26.826 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T13:34:26.826 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:34:26.983 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T13:34:26.991 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T13:34:26.998 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T13:34:27.007 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T13:34:27.007 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:34:27.007 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T13:34:27.007 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T13:34:27.007 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T13:34:27.007 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:34:27.504 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T13:34:27.506 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T13:34:27.566 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T13:34:27.625 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138
2026-03-10T13:34:27.628 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T13:34:27.649 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T13:34:27.649 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T13:34:27.649 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T13:34:27.649 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T13:34:27.649 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-10T13:34:27.649 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:34:27.662 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-10T13:34:27.674 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138 2026-03-10T13:34:28.173 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138 2026-03-10T13:34:28.178 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138 2026-03-10T13:34:28.202 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138 2026-03-10T13:34:28.202 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:34:28.202 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-10T13:34:28.202 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-10T13:34:28.202 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-10T13:34:28.202 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:34:28.214 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138 2026-03-10T13:34:28.236 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138 2026-03-10T13:34:28.236 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:34:28.236 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-10T13:34:28.236 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:34:28.388 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138 2026-03-10T13:34:28.410 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138 2026-03-10T13:34:28.410 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T13:34:28.410 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-10T13:34:28.410 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-10T13:34:28.410 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 
2026-03-10T13:34:28.410 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T13:34:29.544 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138
2026-03-10T13:34:29.555 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138
2026-03-10T13:34:29.561 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138
2026-03-10T13:34:29.618 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138
2026-03-10T13:34:29.628 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-10T13:34:29.632 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138
2026-03-10T13:34:29.632 INFO:teuthology.orchestra.run.vm05.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138
2026-03-10T13:34:29.651 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138
2026-03-10T13:34:29.651 INFO:teuthology.orchestra.run.vm05.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T13:34:31.025 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138
2026-03-10T13:34:31.037 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138
2026-03-10T13:34:31.042 INFO:teuthology.orchestra.run.vm09.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138
2026-03-10T13:34:31.065 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138
2026-03-10T13:34:31.066 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138
2026-03-10T13:34:31.066 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138
2026-03-10T13:34:31.066 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138
2026-03-10T13:34:31.066 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138
2026-03-10T13:34:31.066 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138
2026-03-10T13:34:31.066 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138
2026-03-10T13:34:31.066 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138
2026-03-10T13:34:31.066 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138
2026-03-10T13:34:31.066 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138
2026-03-10T13:34:31.066 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138
2026-03-10T13:34:31.069 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 69/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 85/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 93/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138
2026-03-10T13:34:31.070 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138
2026-03-10T13:34:31.071 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138
2026-03-10T13:34:31.101 INFO:teuthology.orchestra.run.vm09.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138
2026-03-10T13:34:31.111 INFO:teuthology.orchestra.run.vm09.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-10T13:34:31.116 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138
2026-03-10T13:34:31.116 INFO:teuthology.orchestra.run.vm09.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138
2026-03-10T13:34:31.133 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138
2026-03-10T13:34:31.133 INFO:teuthology.orchestra.run.vm09.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T13:34:31.173 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T13:34:31.173 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:34:31.173 INFO:teuthology.orchestra.run.vm05.stdout:Upgraded:
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout:Installed:
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-10T13:34:31.174 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-jmespath-1.0.1-1.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply-3.11-14.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-10T13:34:31.175 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout: python3-xmltodict-0.12.0-15.el9.noarch
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout: socat-1.7.4.1-8.el9.x86_64
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout: zip-3.0-35.el9.x86_64
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:34:31.176 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T13:34:31.269 DEBUG:teuthology.parallel:result is None
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138
2026-03-10T13:34:32.494 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138
2026-03-10T13:34:32.495 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138
2026-03-10T13:34:32.495 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138
2026-03-10T13:34:32.495 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138
2026-03-10T13:34:32.495 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 69/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138
2026-03-10T13:34:32.496 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 85/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 93/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138
2026-03-10T13:34:32.497 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138
2026-03-10T13:34:32.498 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138
2026-03-10T13:34:32.499 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout:Upgraded:
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout:Installed:
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T13:34:32.600
INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T13:34:32.600 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: lua-5.4.4-4.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools-4.2.4-1.el9.noarch 
2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T13:34:32.601 
INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T13:34:32.601 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T13:34:32.602 
INFO:teuthology.orchestra.run.vm09.stdout: zip-3.0-35.el9.x86_64 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:34:32.602 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-10T13:34:32.698 DEBUG:teuthology.parallel:result is None 2026-03-10T13:34:32.698 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:34:33.339 DEBUG:teuthology.orchestra.run.vm05:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-10T13:34:33.359 INFO:teuthology.orchestra.run.vm05.stdout:19.2.3-678.ge911bdeb.el9 2026-03-10T13:34:33.359 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-10T13:34:33.360 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 2026-03-10T13:34:33.360 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:34:33.947 DEBUG:teuthology.orchestra.run.vm09:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-10T13:34:33.966 INFO:teuthology.orchestra.run.vm09.stdout:19.2.3-678.ge911bdeb.el9 2026-03-10T13:34:33.966 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9 2026-03-10T13:34:33.966 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed. 2026-03-10T13:34:33.967 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-10T13:34:33.967 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T13:34:33.967 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T13:34:33.999 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T13:34:33.999 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T13:34:34.035 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 2026-03-10T13:34:34.035 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T13:34:34.035 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T13:34:34.064 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T13:34:34.129 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T13:34:34.129 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T13:34:34.153 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T13:34:34.216 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-10T13:34:34.216 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T13:34:34.216 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T13:34:34.240 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T13:34:34.304 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T13:34:34.304 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T13:34:34.328 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T13:34:34.391 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 
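Note: the two shaman.ceph.com lookups and rpm queries above are how the install task pins the build under test: the repo is resolved by sha1, and the locally installed package must report a matching version string. A minimal sketch of the same check done by hand (assumes jq is installed, and assumes shaman's search response is a JSON list whose entries carry a sha1 field):

    SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df
    curl -s "https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=$SHA1" \
        | jq -r '.[0].sha1'                      # should echo the same sha1 back
    rpm -q ceph --qf '%{VERSION}-%{RELEASE}\n'   # e.g. 19.2.3-678.ge911bdeb.el9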
2026-03-10T13:34:34.391 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T13:34:34.391 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/stdin-killer
2026-03-10T13:34:34.414 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-10T13:34:34.479 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T13:34:34.480 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/stdin-killer
2026-03-10T13:34:34.502 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-10T13:34:34.565 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-10T13:34:34.606 INFO:tasks.cephadm:Config: {'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'client': {'debug ms': 1}, 'global': {'mon election default strategy': 1, 'ms bind msgr2': False, 'ms type': 'async'}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20, 'mon warn on pool no app': False}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd class default list': '*', 'osd class load list': '*', 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'reached quota', 'but it is still running', 'overall HEALTH_', '\\(POOL_FULL\\)', '\\(SMALLER_PGP_NUM\\)', '\\(CACHE_POOL_NO_HIT_SET\\)', '\\(CACHE_POOL_NEAR_FULL\\)', '\\(POOL_APP_NOT_ENABLED\\)', '\\(PG_AVAILABILITY\\)', '\\(PG_DEGRADED\\)', 'CEPHADM_STRAY_DAEMON'], 'log-only-match': ['CEPHADM_'], 'mon_bind_msgr2': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'cephadm_mode': 'cephadm-package'}
2026-03-10T13:34:34.606 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T13:34:34.606 INFO:tasks.cephadm:Cluster fsid is e063dc72-1c85-11f1-a098-09993c5c5b66
2026-03-10T13:34:34.606 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-10T13:34:34.606 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '[v1:192.168.123.105:6789]', 'mon.c': '[v1:192.168.123.105:6790]', 'mon.b': '[v1:192.168.123.109:6789]'}
2026-03-10T13:34:34.606 INFO:tasks.cephadm:First mon is mon.a on vm05
2026-03-10T13:34:34.606 INFO:tasks.cephadm:First mgr is y
2026-03-10T13:34:34.606 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-10T13:34:34.606 DEBUG:teuthology.orchestra.run.vm05:> sudo hostname $(hostname -s)
2026-03-10T13:34:34.629 DEBUG:teuthology.orchestra.run.vm09:> sudo hostname $(hostname -s)
2026-03-10T13:34:34.655 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
2026-03-10T13:34:34.655 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T13:34:34.671 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T13:34:34.828 INFO:teuthology.orchestra.run.vm05.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T13:34:34.847 INFO:teuthology.orchestra.run.vm09.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
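Note: the task pre-pulls the container image on every host before bootstrap so that later daemon placement does not block on a slow registry. The command it runs is exactly the one echoed above:

    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull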
2026-03-10T13:35:17.580 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T13:35:17.580 INFO:teuthology.orchestra.run.vm05.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T13:35:17.580 INFO:teuthology.orchestra.run.vm05.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T13:35:17.580 INFO:teuthology.orchestra.run.vm05.stdout: "repo_digests": [
2026-03-10T13:35:17.580 INFO:teuthology.orchestra.run.vm05.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T13:35:17.580 INFO:teuthology.orchestra.run.vm05.stdout: ]
2026-03-10T13:35:17.580 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T13:35:17.724 INFO:teuthology.orchestra.run.vm09.stdout:{
2026-03-10T13:35:17.724 INFO:teuthology.orchestra.run.vm09.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T13:35:17.724 INFO:teuthology.orchestra.run.vm09.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T13:35:17.724 INFO:teuthology.orchestra.run.vm09.stdout: "repo_digests": [
2026-03-10T13:35:17.724 INFO:teuthology.orchestra.run.vm09.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T13:35:17.724 INFO:teuthology.orchestra.run.vm09.stdout: ]
2026-03-10T13:35:17.724 INFO:teuthology.orchestra.run.vm09.stdout:}
2026-03-10T13:35:17.746 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /etc/ceph
2026-03-10T13:35:17.770 DEBUG:teuthology.orchestra.run.vm09:> sudo mkdir -p /etc/ceph
2026-03-10T13:35:17.797 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 777 /etc/ceph
2026-03-10T13:35:17.835 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 777 /etc/ceph
2026-03-10T13:35:17.863 INFO:tasks.cephadm:Writing seed config...
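Note: the pull on each host reports the resolved version, image id, and repo digest as JSON on stdout, and both hosts resolve to the same digest. A sketch of extracting the digest for pinning (assumes jq is available on the host):

    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull \
        | jq -r '.repo_digests[0]'
    # quay.ceph.io/ceph-ci/ceph@sha256:8fda260a...  (digest-pinned reference)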
2026-03-10T13:35:17.863 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-10T13:35:17.863 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-10T13:35:17.863 INFO:tasks.cephadm: override: [client] debug ms = 1
2026-03-10T13:35:17.864 INFO:tasks.cephadm: override: [global] mon election default strategy = 1
2026-03-10T13:35:17.864 INFO:tasks.cephadm: override: [global] ms bind msgr2 = False
2026-03-10T13:35:17.864 INFO:tasks.cephadm: override: [global] ms type = async
2026-03-10T13:35:17.864 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-10T13:35:17.864 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-10T13:35:17.864 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-10T13:35:17.864 INFO:tasks.cephadm: override: [mon] mon warn on pool no app = False
2026-03-10T13:35:17.864 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-10T13:35:17.864 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-10T13:35:17.864 INFO:tasks.cephadm: override: [osd] osd class default list = *
2026-03-10T13:35:17.864 INFO:tasks.cephadm: override: [osd] osd class load list = *
2026-03-10T13:35:17.864 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-10T13:35:17.864 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True
2026-03-10T13:35:17.864 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T13:35:17.864 DEBUG:teuthology.orchestra.run.vm05:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-10T13:35:17.889 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000    # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = e063dc72-1c85-11f1-a098-09993c5c5b66
mon election default strategy = 1
ms bind msgr2 = False
ms type = async
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = True
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd class default list = *
osd class load list = *
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660    # 11m
auth service ticket ttl = 240    # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
mon warn on pool no app = False
[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
[client]
debug ms = 1
2026-03-10T13:35:17.889 DEBUG:teuthology.orchestra.run.vm05:mon.a> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.a.service
2026-03-10T13:35:17.931 DEBUG:teuthology.orchestra.run.vm05:mgr.y> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mgr.y.service
2026-03-10T13:35:17.973 INFO:tasks.cephadm:Bootstrapping...
2026-03-10T13:35:17.973 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-addrv '[v1:192.168.123.105:6789]' --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-10T13:35:18.115 INFO:teuthology.orchestra.run.vm05.stdout:--------------------------------------------------------------------------------
2026-03-10T13:35:18.115 INFO:teuthology.orchestra.run.vm05.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', 'e063dc72-1c85-11f1-a098-09993c5c5b66', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-addrv', '[v1:192.168.123.105:6789]', '--skip-admin-label']
2026-03-10T13:35:18.115 INFO:teuthology.orchestra.run.vm05.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-10T13:35:18.116 INFO:teuthology.orchestra.run.vm05.stdout:Verifying podman|docker is present...
2026-03-10T13:35:18.136 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stdout 5.8.0
2026-03-10T13:35:18.136 INFO:teuthology.orchestra.run.vm05.stdout:Verifying lvm2 is present...
2026-03-10T13:35:18.136 INFO:teuthology.orchestra.run.vm05.stdout:Verifying time synchronization is in place...
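Note: most of the flags in the bootstrap line above are teuthology-specific (pre-chosen fsid, explicit mon/mgr ids, no monitoring stack, no admin label). A minimal hand-run equivalent on a fresh host, as a sketch under the assumption that defaults are acceptable for everything else, is:

    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        bootstrap \
        --config seed.ceph.conf \
        --mon-ip 192.168.123.105 \
        --skip-monitoring-stack

Here --mon-ip replaces the test's explicit --mon-addrv, and cephadm generates the fsid itself, which is also what the warning about specifying an fsid is suggesting.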
2026-03-10T13:35:18.143 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T13:35:18.143 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T13:35:18.149 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T13:35:18.149 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout inactive
2026-03-10T13:35:18.154 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout enabled
2026-03-10T13:35:18.160 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout active
2026-03-10T13:35:18.160 INFO:teuthology.orchestra.run.vm05.stdout:Unit chronyd.service is enabled and running
2026-03-10T13:35:18.160 INFO:teuthology.orchestra.run.vm05.stdout:Repeating the final host check...
2026-03-10T13:35:18.178 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stdout 5.8.0
2026-03-10T13:35:18.179 INFO:teuthology.orchestra.run.vm05.stdout:podman (/bin/podman) version 5.8.0 is present
2026-03-10T13:35:18.179 INFO:teuthology.orchestra.run.vm05.stdout:systemctl is present
2026-03-10T13:35:18.179 INFO:teuthology.orchestra.run.vm05.stdout:lvcreate is present
2026-03-10T13:35:18.184 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T13:35:18.185 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T13:35:18.191 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T13:35:18.191 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout inactive
2026-03-10T13:35:18.198 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout enabled
2026-03-10T13:35:18.203 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stdout active
2026-03-10T13:35:18.203 INFO:teuthology.orchestra.run.vm05.stdout:Unit chronyd.service is enabled and running
2026-03-10T13:35:18.203 INFO:teuthology.orchestra.run.vm05.stdout:Host looks OK
2026-03-10T13:35:18.203 INFO:teuthology.orchestra.run.vm05.stdout:Cluster fsid: e063dc72-1c85-11f1-a098-09993c5c5b66
2026-03-10T13:35:18.203 INFO:teuthology.orchestra.run.vm05.stdout:Acquiring lock 139787173287728 on /run/cephadm/e063dc72-1c85-11f1-a098-09993c5c5b66.lock
2026-03-10T13:35:18.203 INFO:teuthology.orchestra.run.vm05.stdout:Lock 139787173287728 acquired on /run/cephadm/e063dc72-1c85-11f1-a098-09993c5c5b66.lock
2026-03-10T13:35:18.204 INFO:teuthology.orchestra.run.vm05.stdout:Verifying IP 192.168.123.105 port 6789 ...
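Note: the pair of non-zero exits above is not a failure; the time-sync probe simply tries candidate unit names in turn until one is both enabled and active, so the missing chrony.service is skipped and chronyd.service passes. A rough sketch of the same probe (the exact candidate list cephadm uses is longer):

    for unit in chrony.service chronyd.service ntpd.service; do
        systemctl is-enabled "$unit" >/dev/null 2>&1 \
            && systemctl is-active "$unit" >/dev/null 2>&1 \
            && echo "Unit $unit is enabled and running" && break
    done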
2026-03-10T13:35:18.204 INFO:teuthology.orchestra.run.vm05.stdout:Base mon IP(s) is [192.168.123.105:6789], mon addrv is [v1:192.168.123.105:6789]
2026-03-10T13:35:18.207 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.105 metric 100
2026-03-10T13:35:18.207 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.105 metric 100
2026-03-10T13:35:18.209 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-10T13:35:18.209 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium
2026-03-10T13:35:18.211 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-10T13:35:18.211 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-10T13:35:18.211 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T13:35:18.211 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000
2026-03-10T13:35:18.211 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:5/64 scope link noprefixroute
2026-03-10T13:35:18.211 INFO:teuthology.orchestra.run.vm05.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T13:35:18.211 INFO:teuthology.orchestra.run.vm05.stdout:Mon IP `192.168.123.105` is in CIDR network `192.168.123.0/24`
2026-03-10T13:35:18.211 INFO:teuthology.orchestra.run.vm05.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24']
2026-03-10T13:35:18.212 INFO:teuthology.orchestra.run.vm05.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-10T13:35:18.212 INFO:teuthology.orchestra.run.vm05.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T13:35:19.555 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c
2026-03-10T13:35:19.555 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
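Note: the public CIDR inference above walks the kernel routing table and picks the subnet that contains the mon IP. The same lookup can be reproduced by hand, for example:

    ip route show | awk '/proto kernel/ && /src 192\.168\.123\.105/ {print $1}'
    # 192.168.123.0/24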
2026-03-10T13:35:19.555 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Getting image source signatures
2026-03-10T13:35:19.555 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7
2026-03-10T13:35:19.555 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92
2026-03-10T13:35:19.555 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c
2026-03-10T13:35:19.555 INFO:teuthology.orchestra.run.vm05.stdout:/bin/podman: stderr Writing manifest to image destination
2026-03-10T13:35:19.851 INFO:teuthology.orchestra.run.vm05.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T13:35:19.852 INFO:teuthology.orchestra.run.vm05.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T13:35:19.852 INFO:teuthology.orchestra.run.vm05.stdout:Extracting ceph user uid/gid from container image...
2026-03-10T13:35:20.115 INFO:teuthology.orchestra.run.vm05.stdout:stat: stdout 167 167
2026-03-10T13:35:20.116 INFO:teuthology.orchestra.run.vm05.stdout:Creating initial keys...
2026-03-10T13:35:20.402 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-authtool: stdout AQAYHrBpyFr0DhAA3wmM4xGimW94c/CTigGuBw==
2026-03-10T13:35:20.725 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-authtool: stdout AQAYHrBpXOqWHxAA9D85v4TtmEY+fnY3ZCTTjA==
2026-03-10T13:35:21.017 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph-authtool: stdout AQAYHrBp6XFLMxAAEiMR+w9k0DlaKiRp8Ech2A==
2026-03-10T13:35:21.017 INFO:teuthology.orchestra.run.vm05.stdout:Creating initial monmap...
2026-03-10T13:35:21.309 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T13:35:21.309 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-10T13:35:21.309 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to e063dc72-1c85-11f1-a098-09993c5c5b66
2026-03-10T13:35:21.309 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T13:35:21.309 INFO:teuthology.orchestra.run.vm05.stdout:monmaptool for a [v1:192.168.123.105:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T13:35:21.309 INFO:teuthology.orchestra.run.vm05.stdout:setting min_mon_release = quincy
2026-03-10T13:35:21.309 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: set fsid to e063dc72-1c85-11f1-a098-09993c5c5b66
2026-03-10T13:35:21.309 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T13:35:21.309 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:35:21.309 INFO:teuthology.orchestra.run.vm05.stdout:Creating mon...
2026-03-10T13:35:21.590 INFO:teuthology.orchestra.run.vm05.stdout:create mon.a on
2026-03-10T13:35:21.760 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target".
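Note: the monmaptool output appears twice above because cephadm echoes the tool's stdout a second time in its own log line. The underlying invocation, run inside the container, is along these lines (a sketch, not cephadm's exact argv):

    monmaptool --create --clobber \
        --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 \
        --addv a '[v1:192.168.123.105:6789]' \
        /tmp/monmap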
2026-03-10T13:35:21.947 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-10T13:35:22.146 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-e063dc72-1c85-11f1-a098-09993c5c5b66.target → /etc/systemd/system/ceph-e063dc72-1c85-11f1-a098-09993c5c5b66.target.
2026-03-10T13:35:22.146 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-e063dc72-1c85-11f1-a098-09993c5c5b66.target → /etc/systemd/system/ceph-e063dc72-1c85-11f1-a098-09993c5c5b66.target.
2026-03-10T13:35:22.351 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.a
2026-03-10T13:35:22.351 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to reset failed state of unit ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.a.service: Unit ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.a.service not loaded.
2026-03-10T13:35:22.485 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-e063dc72-1c85-11f1-a098-09993c5c5b66.target.wants/ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.a.service → /etc/systemd/system/ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@.service.
2026-03-10T13:35:23.049 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:23 vm05 podman[51089]: 2026-03-10 13:35:23.047433218 +0000 UTC m=+0.446220243 container create 06dbff191b3984a5aa14a475e4162761fb799cb10d3644af3b78d2129d4c5ca4 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-10T13:35:23.967 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present
2026-03-10T13:35:23.967 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to enable service . firewalld.service is not available
2026-03-10T13:35:23.967 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mon to start...
2026-03-10T13:35:23.967 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mon...
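Note: the symlinks above wire each daemon into systemd as a templated unit, ceph-<fsid>@<daemon>.service, hanging off a per-cluster target that is in turn wanted by ceph.target. After bootstrap the layout can be inspected with:

    systemctl status 'ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.a.service'
    systemctl list-dependencies ceph.target

The "Failed to reset failed state" message is expected on a first deploy: cephadm defensively runs systemctl reset-failed before enabling a unit that has never been loaded.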
2026-03-10T13:35:24.311 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:23 vm05 podman[51089]: 2026-03-10 13:35:23.944291445 +0000 UTC m=+1.343078470 container init 06dbff191b3984a5aa14a475e4162761fb799cb10d3644af3b78d2129d4c5ca4 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
2026-03-10T13:35:24.311 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:23 vm05 podman[51089]: 2026-03-10 13:35:23.949311807 +0000 UTC m=+1.348098832 container start 06dbff191b3984a5aa14a475e4162761fb799cb10d3644af3b78d2129d4c5ca4 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T13:35:24.311 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:23 vm05 bash[51089]: 06dbff191b3984a5aa14a475e4162761fb799cb10d3644af3b78d2129d4c5ca4
2026-03-10T13:35:24.311 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:23 vm05 systemd[1]: Started Ceph mon.a for e063dc72-1c85-11f1-a098-09993c5c5b66.
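Note: the journalctl@ceph.mon.a lines are teuthology tailing the daemon's journald unit, using exactly the command echoed earlier in this log. The same stream can be followed by hand on the host:

    sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.a.service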
2026-03-10T13:35:24.311 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:24 vm05 ceph-mon[51125]: mkfs e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:35:24.311 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:24 vm05 ceph-mon[51125]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout cluster: 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout id: e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout services: 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.177769s) 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout data: 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout pgs: 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.136+0000 7f0550ac0640 1 Processor -- start 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.137+0000 7f0550ac0640 1 -- start start 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.138+0000 7f0550ac0640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f054c10cbe0 con 0x7f054c1087b0 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.138+0000 7f054b7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f054c1087b0 0x7f054c108bb0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:48310/0 (socket says 192.168.123.105:48310) 2026-03-10T13:35:24.331 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.138+0000 7f054b7fe640 1 -- 192.168.123.105:0/2929954046 learned_addr learned my addr 192.168.123.105:0/2929954046 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.139+0000 7f054a7fc640 1 -- 192.168.123.105:0/2929954046 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3320136606 0 0) 0x7f054c10cbe0 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.139+0000 7f054a7fc640 1 -- 192.168.123.105:0/2929954046 --> 
v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f052c003620 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.139+0000 7f054a7fc640 1 -- 192.168.123.105:0/2929954046 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 4098535593 0 0) 0x7f052c003620 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.139+0000 7f054a7fc640 1 -- 192.168.123.105:0/2929954046 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f054c10ddc0 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.139+0000 7f054a7fc640 1 -- 192.168.123.105:0/2929954046 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f0530002e10 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.139+0000 7f054a7fc640 1 -- 192.168.123.105:0/2929954046 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(0 keys) ==== 4+0+0 (unknown 0 0 0) 0x7f05300030e0 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.140+0000 7f0550ac0640 1 -- 192.168.123.105:0/2929954046 >> v1:192.168.123.105:6789/0 conn(0x7f054c1087b0 legacy=0x7f054c108bb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.140+0000 7f0550ac0640 1 -- 192.168.123.105:0/2929954046 shutdown_connections 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.140+0000 7f0550ac0640 1 -- 192.168.123.105:0/2929954046 >> 192.168.123.105:0/2929954046 conn(0x7f054c07bc90 msgr2=0x7f054c07c0a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.140+0000 7f0550ac0640 1 -- 192.168.123.105:0/2929954046 shutdown_connections 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.140+0000 7f0550ac0640 1 -- 192.168.123.105:0/2929954046 wait complete. 
2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.141+0000 7f0550ac0640 1 Processor -- start 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.141+0000 7f0550ac0640 1 -- start start 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.141+0000 7f0550ac0640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f054c19e190 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.141+0000 7f054b7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f054c1087b0 0x7f054c19da80 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:48318/0 (socket says 192.168.123.105:48318) 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.141+0000 7f054b7fe640 1 -- 192.168.123.105:0/1835639079 learned_addr learned my addr 192.168.123.105:0/1835639079 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.142+0000 7f0548ff9640 1 -- 192.168.123.105:0/1835639079 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1537050393 0 0) 0x7f054c19e190 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.142+0000 7f0548ff9640 1 -- 192.168.123.105:0/1835639079 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0520003620 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.142+0000 7f0548ff9640 1 -- 192.168.123.105:0/1835639079 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1348501357 0 0) 0x7f0520003620 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.142+0000 7f0548ff9640 1 -- 192.168.123.105:0/1835639079 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f054c19e190 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.142+0000 7f0548ff9640 1 -- 192.168.123.105:0/1835639079 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f0530004a40 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.142+0000 7f0548ff9640 1 -- 192.168.123.105:0/1835639079 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3625252620 0 0) 0x7f054c19e190 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.142+0000 7f0548ff9640 1 -- 192.168.123.105:0/1835639079 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f054c19e360 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.143+0000 7f0550ac0640 1 -- 192.168.123.105:0/1835639079 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f054c19e670 con 0x7f054c1087b0 2026-03-10T13:35:24.332 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.143+0000 7f0548ff9640 1 -- 192.168.123.105:0/1835639079 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(0 keys) ==== 4+0+0 (unknown 0 0 0) 0x7f0530004e90 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.143+0000 7f0548ff9640 1 -- 192.168.123.105:0/1835639079 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f0530005150 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.143+0000 7f0550ac0640 1 -- 192.168.123.105:0/1835639079 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f054c1a21b0 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.143+0000 7f0548ff9640 1 -- 192.168.123.105:0/1835639079 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 1) ==== 811+0+0 (unknown 4133961934 0 0) 0x7f0530005680 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.143+0000 7f0548ff9640 1 -- 192.168.123.105:0/1835639079 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 4001592299 0 0) 0x7f05300067d0 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.144+0000 7f0550ac0640 1 -- 192.168.123.105:0/1835639079 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0510005180 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.146+0000 7f0548ff9640 1 -- 192.168.123.105:0/1835639079 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (unknown 1092875540 0 4127419540) 0x7f0530003310 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.180+0000 7f0550ac0640 1 -- 192.168.123.105:0/1835639079 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "status"} v 0) -- 0x7f0510005d40 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.181+0000 7f0548ff9640 1 -- 192.168.123.105:0/1835639079 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "status"}]=0 v0) ==== 54+0+317 (unknown 1155462804 0 2611184152) 0x7f05300069d0 con 0x7f054c1087b0 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.181+0000 7f0550ac0640 1 -- 192.168.123.105:0/1835639079 >> v1:192.168.123.105:6789/0 conn(0x7f054c1087b0 legacy=0x7f054c19da80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.182+0000 7f0550ac0640 1 -- 192.168.123.105:0/1835639079 shutdown_connections 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.182+0000 7f0550ac0640 1 -- 192.168.123.105:0/1835639079 >> 192.168.123.105:0/1835639079 conn(0x7f054c07bc90 msgr2=0x7f054c107780 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.182+0000 7f0550ac0640 1 -- 
192.168.123.105:0/1835639079 shutdown_connections
2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.182+0000 7f0550ac0640 1 -- 192.168.123.105:0/1835639079 wait complete.
2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:mon is available
2026-03-10T13:35:24.332 INFO:teuthology.orchestra.run.vm05.stdout:Assimilating anything we can from ceph.conf...
2026-03-10T13:35:24.672 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-10T13:35:24.672 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T13:35:24.672 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout fsid = e063dc72-1c85-11f1-a098-09993c5c5b66
2026-03-10T13:35:24.672 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T13:35:24.672 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_host = [v1:192.168.123.105:6789]
2026-03-10T13:35:24.672 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T13:35:24.672 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T13:35:24.672 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T13:35:24.672 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T13:35:24.672 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-10T13:35:24.672 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T13:35:24.672 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T13:35:24.672 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-10T13:35:24.672 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T13:35:24.672 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.475+0000 7f1011f47640 1 Processor -- start
2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.476+0000 7f1011f47640 1 -- start start
2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.476+0000 7f1011f47640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f100c07f680 con 0x7f100c07d1f0
2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.476+0000 7f100b7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f100c07d1f0 0x7f100c07d5f0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:48330/0 (socket says 192.168.123.105:48330)
2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.476+0000 7f100b7fe640 1 -- 192.168.123.105:0/2570101384 learned_addr learned my addr 192.168.123.105:0/2570101384 (peer_addr_for_me v1:192.168.123.105:0/0)
2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.477+0000 7f100a7fc640 1 -- 192.168.123.105:0/2570101384 <== mon.0
v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4289391830 0 0) 0x7f100c07f680 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.477+0000 7f100a7fc640 1 -- 192.168.123.105:0/2570101384 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0fe8003620 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.477+0000 7f100a7fc640 1 -- 192.168.123.105:0/2570101384 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 3975712633 0 0) 0x7f0fe8003620 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.477+0000 7f100a7fc640 1 -- 192.168.123.105:0/2570101384 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f100c080860 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.478+0000 7f100a7fc640 1 -- 192.168.123.105:0/2570101384 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f0ffc002e10 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.478+0000 7f100a7fc640 1 -- 192.168.123.105:0/2570101384 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(0 keys) ==== 4+0+0 (unknown 0 0 0) 0x7f0ffc0030e0 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.478+0000 7f1011f47640 1 -- 192.168.123.105:0/2570101384 >> v1:192.168.123.105:6789/0 conn(0x7f100c07d1f0 legacy=0x7f100c07d5f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.478+0000 7f1011f47640 1 -- 192.168.123.105:0/2570101384 shutdown_connections 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.478+0000 7f1011f47640 1 -- 192.168.123.105:0/2570101384 >> 192.168.123.105:0/2570101384 conn(0x7f100c07be80 msgr2=0x7f100c07c2d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.478+0000 7f1011f47640 1 -- 192.168.123.105:0/2570101384 shutdown_connections 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.479+0000 7f1011f47640 1 -- 192.168.123.105:0/2570101384 wait complete. 
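The "Assimilating anything we can from ceph.conf..." step above feeds the bootstrap ceph.conf into the monitors' central config database; the [global]/[mgr]/[osd] block echoed on stdout is that command's output. A minimal sketch of the same step, assuming a reachable mon and an admin keyring; the wrapper function and path are illustrative, not from this run:

    import subprocess

    def assimilate_conf(conf_path="/etc/ceph/ceph.conf"):
        # `ceph config assimilate-conf -i FILE` stores every option it
        # recognizes in the mon config database and prints back a conf
        # containing whatever could not be assimilated. Illustrative
        # wrapper; the run above drives the same mon_command,
        # {"prefix": "config assimilate-conf"}.
        out = subprocess.run(
            ["ceph", "config", "assimilate-conf", "-i", conf_path],
            check=True, capture_output=True, text=True,
        )
        return out.stdout
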
2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.479+0000 7f1011f47640 1 Processor -- start 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.479+0000 7f1011f47640 1 -- start start 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.479+0000 7f1011f47640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f100c1a2820 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.480+0000 7f100b7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f100c07d1f0 0x7f100c1a2110 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:48346/0 (socket says 192.168.123.105:48346) 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.480+0000 7f100b7fe640 1 -- 192.168.123.105:0/3812143385 learned_addr learned my addr 192.168.123.105:0/3812143385 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.480+0000 7f1008ff9640 1 -- 192.168.123.105:0/3812143385 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1170690566 0 0) 0x7f100c1a2820 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.480+0000 7f1008ff9640 1 -- 192.168.123.105:0/3812143385 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0fe0003620 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.480+0000 7f1008ff9640 1 -- 192.168.123.105:0/3812143385 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1795688447 0 0) 0x7f0fe0003620 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.480+0000 7f1008ff9640 1 -- 192.168.123.105:0/3812143385 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f100c1a2820 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.480+0000 7f1008ff9640 1 -- 192.168.123.105:0/3812143385 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f0ffc004a40 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.481+0000 7f1008ff9640 1 -- 192.168.123.105:0/3812143385 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 856263842 0 0) 0x7f100c1a2820 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.481+0000 7f1008ff9640 1 -- 192.168.123.105:0/3812143385 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f100c1a29f0 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.481+0000 7f1008ff9640 1 -- 192.168.123.105:0/3812143385 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(0 keys) ==== 4+0+0 (unknown 0 0 0) 0x7f0ffc004e90 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.481+0000 7f1011f47640 1 -- 192.168.123.105:0/3812143385 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f100c1a2d00 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.481+0000 7f1008ff9640 1 -- 192.168.123.105:0/3812143385 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f0ffc005150 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.481+0000 7f1011f47640 1 -- 192.168.123.105:0/3812143385 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f100c1a6810 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.482+0000 7f1008ff9640 1 -- 192.168.123.105:0/3812143385 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 1) ==== 811+0+0 (unknown 4133961934 0 0) 0x7f0ffc005680 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.482+0000 7f1008ff9640 1 -- 192.168.123.105:0/3812143385 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 4001592299 0 0) 0x7f0ffc003350 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.482+0000 7f1011f47640 1 -- 192.168.123.105:0/3812143385 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0fd0005180 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.484+0000 7f1008ff9640 1 -- 192.168.123.105:0/3812143385 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (unknown 1092875540 0 4127419540) 0x7f0ffc0059a0 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.517+0000 7f1011f47640 1 -- 192.168.123.105:0/3812143385 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "config assimilate-conf"} v 0) -- 0x7f0fd0003c00 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.524+0000 7f1008ff9640 1 -- 192.168.123.105:0/3812143385 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v2) ==== 70+0+356 (unknown 1213389831 0 1261977095) 0x7f0ffc005b80 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.525+0000 7f1008ff9640 1 -- 192.168.123.105:0/3812143385 <== mon.0 v1:192.168.123.105:6789/0 11 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f0ffc006190 con 0x7f100c07d1f0 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.526+0000 7f1011f47640 1 -- 192.168.123.105:0/3812143385 >> v1:192.168.123.105:6789/0 conn(0x7f100c07d1f0 legacy=0x7f100c1a2110 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.526+0000 7f1011f47640 1 -- 192.168.123.105:0/3812143385 shutdown_connections 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-10T13:35:24.526+0000 7f1011f47640 1 -- 192.168.123.105:0/3812143385 >> 192.168.123.105:0/3812143385 conn(0x7f100c07be80 msgr2=0x7f100c105e20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.526+0000 7f1011f47640 1 -- 192.168.123.105:0/3812143385 shutdown_connections 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.527+0000 7f1011f47640 1 -- 192.168.123.105:0/3812143385 wait complete. 2026-03-10T13:35:24.673 INFO:teuthology.orchestra.run.vm05.stdout:Generating new minimal ceph.conf... 2026-03-10T13:35:24.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.802+0000 7f4714777640 1 Processor -- start 2026-03-10T13:35:24.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.803+0000 7f4714777640 1 -- start start 2026-03-10T13:35:24.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.803+0000 7f4714777640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f470c10cd80 con 0x7f470c108950 2026-03-10T13:35:24.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.803+0000 7f47124ec640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f470c108950 0x7f470c108d50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:48354/0 (socket says 192.168.123.105:48354) 2026-03-10T13:35:24.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.803+0000 7f47124ec640 1 -- 192.168.123.105:0/2016039628 learned_addr learned my addr 192.168.123.105:0/2016039628 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:24.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.803+0000 7f47114ea640 1 -- 192.168.123.105:0/2016039628 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1776330832 0 0) 0x7f470c10cd80 con 0x7f470c108950 2026-03-10T13:35:24.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.803+0000 7f47114ea640 1 -- 192.168.123.105:0/2016039628 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f46f0003620 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.804+0000 7f47114ea640 1 -- 192.168.123.105:0/2016039628 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 603544537 0 0) 0x7f46f0003620 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.804+0000 7f47114ea640 1 -- 192.168.123.105:0/2016039628 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f470c10df60 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.804+0000 7f47114ea640 1 -- 192.168.123.105:0/2016039628 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f46f4002e10 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.804+0000 7f47114ea640 1 -- 192.168.123.105:0/2016039628 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 
630610420 0 0) 0x7f46f40034a0 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.804+0000 7f4714777640 1 -- 192.168.123.105:0/2016039628 >> v1:192.168.123.105:6789/0 conn(0x7f470c108950 legacy=0x7f470c108d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.805+0000 7f4714777640 1 -- 192.168.123.105:0/2016039628 shutdown_connections 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.805+0000 7f4714777640 1 -- 192.168.123.105:0/2016039628 >> 192.168.123.105:0/2016039628 conn(0x7f470c07bdf0 msgr2=0x7f470c07c240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.805+0000 7f4714777640 1 -- 192.168.123.105:0/2016039628 shutdown_connections 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.805+0000 7f4714777640 1 -- 192.168.123.105:0/2016039628 wait complete. 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.805+0000 7f4714777640 1 Processor -- start 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.805+0000 7f4714777640 1 -- start start 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.805+0000 7f4714777640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f470c19ec60 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.806+0000 7f47124ec640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f470c108950 0x7f470c19e550 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:48360/0 (socket says 192.168.123.105:48360) 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.806+0000 7f47124ec640 1 -- 192.168.123.105:0/2446548672 learned_addr learned my addr 192.168.123.105:0/2446548672 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.806+0000 7f47037fe640 1 -- 192.168.123.105:0/2446548672 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 864663480 0 0) 0x7f470c19ec60 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.806+0000 7f47037fe640 1 -- 192.168.123.105:0/2446548672 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f46e0003620 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.806+0000 7f47037fe640 1 -- 192.168.123.105:0/2446548672 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 363742948 0 0) 0x7f46e0003620 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.806+0000 7f47037fe640 1 -- 192.168.123.105:0/2446548672 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f470c19ec60 con 0x7f470c108950 2026-03-10T13:35:24.997 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.807+0000 7f47037fe640 1 -- 192.168.123.105:0/2446548672 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f46f4003270 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.807+0000 7f47037fe640 1 -- 192.168.123.105:0/2446548672 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1586689178 0 0) 0x7f470c19ec60 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.807+0000 7f47037fe640 1 -- 192.168.123.105:0/2446548672 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f470c19ee30 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.807+0000 7f4714777640 1 -- 192.168.123.105:0/2446548672 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f470c19f140 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.807+0000 7f4714777640 1 -- 192.168.123.105:0/2446548672 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f470c1a2c50 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.808+0000 7f47037fe640 1 -- 192.168.123.105:0/2446548672 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f46f4004fc0 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.808+0000 7f47037fe640 1 -- 192.168.123.105:0/2446548672 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f46f40060f0 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.808+0000 7f47037fe640 1 -- 192.168.123.105:0/2446548672 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 1) ==== 811+0+0 (unknown 4133961934 0 0) 0x7f46f4007750 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.808+0000 7f47037fe640 1 -- 192.168.123.105:0/2446548672 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 4001592299 0 0) 0x7f46f4006c70 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.809+0000 7f4714777640 1 -- 192.168.123.105:0/2446548672 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f470c10db20 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.810+0000 7f47037fe640 1 -- 192.168.123.105:0/2446548672 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (unknown 1092875540 0 4127419540) 0x7f46f40062b0 con 0x7f470c108950 2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.841+0000 7f4714777640 1 -- 192.168.123.105:0/2446548672 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "config generate-minimal-conf"} v 0) -- 0x7f470c1a3260 con 0x7f470c108950 2026-03-10T13:35:24.997 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.842+0000 7f47037fe640 1 -- 192.168.123.105:0/2446548672 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v2) ==== 76+0+150 (unknown 1452402520 0 38669791) 0x7f46f4004b90 con 0x7f470c108950
2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.844+0000 7f4714777640 1 -- 192.168.123.105:0/2446548672 >> v1:192.168.123.105:6789/0 conn(0x7f470c108950 legacy=0x7f470c19e550 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.844+0000 7f4714777640 1 -- 192.168.123.105:0/2446548672 shutdown_connections
2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.844+0000 7f4714777640 1 -- 192.168.123.105:0/2446548672 >> 192.168.123.105:0/2446548672 conn(0x7f470c07bdf0 msgr2=0x7f470c105810 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.844+0000 7f4714777640 1 -- 192.168.123.105:0/2446548672 shutdown_connections
2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:24.845+0000 7f4714777640 1 -- 192.168.123.105:0/2446548672 wait complete.
2026-03-10T13:35:24.997 INFO:teuthology.orchestra.run.vm05.stdout:Restarting the monitor...
2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 systemd[1]: Stopping Ceph mon.a for e063dc72-1c85-11f1-a098-09993c5c5b66...
2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51125]: mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51125]: monmap epoch 1
2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51125]: fsid e063dc72-1c85-11f1-a098-09993c5c5b66
2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51125]: last_changed 2026-03-10T13:35:21.154333+0000
2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51125]: created 2026-03-10T13:35:21.154333+0000
2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51125]: min_mon_release 19 (squid)
2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51125]: election_strategy: 1
2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51125]: 0: v1:192.168.123.105:6789/0 mon.a
2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51125]: fsmap
2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51125]: osdmap e1: 0 total, 0 up, 0 in
2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51125]: mgrmap e1: no daemons active
2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51125]: from='client.? v1:192.168.123.105:0/1835639079' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51125]: from='client.?
v1:192.168.123.105:0/3812143385' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51125]: from='client.? v1:192.168.123.105:0/3812143385' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51125]: from='client.? v1:192.168.123.105:0/2446548672' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a[51100]: 2026-03-10T13:35:25.081+0000 7f51b23de640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a[51100]: 2026-03-10T13:35:25.081+0000 7f51b23de640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T13:35:25.330 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 podman[51398]: 2026-03-10 13:35:25.217105352 +0000 UTC m=+0.149170108 container died 06dbff191b3984a5aa14a475e4162761fb799cb10d3644af3b78d2129d4c5ca4 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3) 2026-03-10T13:35:25.536 INFO:teuthology.orchestra.run.vm05.stdout:Setting public_network to 192.168.123.0/24 in mon config section 2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 podman[51398]: 2026-03-10 13:35:25.33699001 +0000 UTC m=+0.269054755 container remove 06dbff191b3984a5aa14a475e4162761fb799cb10d3644af3b78d2129d4c5ca4 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 bash[51398]: 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a 2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.a.service: Deactivated successfully. 2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 systemd[1]: Stopped Ceph mon.a for e063dc72-1c85-11f1-a098-09993c5c5b66. 2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 systemd[1]: Starting Ceph mon.a for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 podman[51477]: 2026-03-10 13:35:25.488813044 +0000 UTC m=+0.017405494 container create 0cf81e75bce1552c1892a2cb7d20c1b236286d4a36cfcb8bc67d75827f5d7598 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 podman[51477]: 2026-03-10 13:35:25.525925428 +0000 UTC m=+0.054517878 container init 0cf81e75bce1552c1892a2cb7d20c1b236286d4a36cfcb8bc67d75827f5d7598 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223) 2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 podman[51477]: 2026-03-10 13:35:25.530285082 +0000 UTC m=+0.058877532 container start 0cf81e75bce1552c1892a2cb7d20c1b236286d4a36cfcb8bc67d75827f5d7598 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) 
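With the old options absorbed, the bootstrap asks the mon for a minimal client conf (the "config generate-minimal-conf" mon_command answering the "Generating new minimal ceph.conf..." message above) and then bounces mon.a through its fsid-qualified systemd unit, which is what the journalctl stop/start records here show. A sketch of both steps, assuming an admin keyring and passwordless sudo; the helper names are illustrative:

    import subprocess

    FSID = "e063dc72-1c85-11f1-a098-09993c5c5b66"  # fsid of this run

    def write_minimal_conf(path="/etc/ceph/ceph.conf"):
        # `ceph config generate-minimal-conf` returns the smallest conf a
        # client needs (essentially fsid plus mon_host), which the
        # bootstrap installs as the new ceph.conf.
        out = subprocess.run(
            ["ceph", "config", "generate-minimal-conf"],
            check=True, capture_output=True, text=True,
        )
        with open(path, "w") as f:
            f.write(out.stdout)

    def restart_daemon(daemon="mon.a", fsid=FSID):
        # cephadm wraps each containerized daemon in a systemd unit named
        # ceph-<fsid>@<daemon>.service; the journal above shows exactly
        # this unit for mon.a being stopped and started again.
        subprocess.run(
            ["sudo", "systemctl", "restart", f"ceph-{fsid}@{daemon}.service"],
            check=True,
        )
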
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 bash[51477]: 0cf81e75bce1552c1892a2cb7d20c1b236286d4a36cfcb8bc67d75827f5d7598
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 podman[51477]: 2026-03-10 13:35:25.482322991 +0000 UTC m=+0.010915450 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 systemd[1]: Started Ceph mon.a for e063dc72-1c85-11f1-a098-09993c5c5b66.
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: set uid:gid to 167:167 (ceph:ceph)
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: pidfile_write: ignore empty --pid-file
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: load: jerasure load: lrc
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: RocksDB version: 7.9.2
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Git sha 0
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Compile date 2026-02-25 18:11:04
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: DB SUMMARY
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: DB Session ID: EOJMZ4AMPG5QQJWSG2TH
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: CURRENT file: CURRENT
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: IDENTITY file: IDENTITY
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 88081 ;
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.error_if_exists: 0
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.create_if_missing: 0
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.paranoid_checks: 1
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb:
Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.env: 0x5556e659adc0 2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.fs: PosixFileSystem 2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.info_log: 0x5556e8a965c0 2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_file_opening_threads: 16 2026-03-10T13:35:25.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.statistics: (nil) 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.use_fsync: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_log_file_size: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.keep_log_file_num: 1000 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.recycle_log_file_num: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.allow_fallocate: 1 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.allow_mmap_reads: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.allow_mmap_writes: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.use_direct_reads: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.create_missing_column_families: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.db_log_dir: 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.wal_dir: 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: 
rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.advise_random_on_open: 1 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.db_write_buffer_size: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.write_buffer_manager: 0x5556e8a9b900 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.rate_limiter: (nil) 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.wal_recovery_mode: 2 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.enable_thread_tracking: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.enable_pipelined_write: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.unordered_write: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.row_cache: None 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.wal_filter: None 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.allow_ingest_behind: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.two_write_queues: 0 2026-03-10T13:35:25.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.manual_wal_flush: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.wal_compression: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.atomic_flush: 0 2026-03-10T13:35:25.586 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.log_readahead_size: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.best_efforts_recovery: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.allow_data_in_errors: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.db_host_id: __hostname__ 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_background_jobs: 2 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_background_compactions: -1 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_subcompactions: 1 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_total_wal_size: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_open_files: -1 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.bytes_per_sync: 0 2026-03-10T13:35:25.586 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compaction_readahead_size: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_background_flushes: -1 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Compression algorithms supported: 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: kZSTD supported: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: kXpressCompression supported: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: kBZip2Compression supported: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: kLZ4Compression supported: 1 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: kZlibCompression supported: 1 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: kLZ4HCCompression supported: 1 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: kSnappyCompression supported: 1 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T13:35:25.586 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.merge_operator: 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compaction_filter: None 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compaction_filter_factory: None 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.sst_partitioner_factory: None 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.table_factory: BlockBasedTable 
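The rocksdb: Options.* wall running through this part of the journal is the restarted mon echoing the effective settings of its key-value store in /var/lib/ceph/mon/ceph-a/store.db as it opens; Ceph seeds these from the mon_rocksdb_options setting, whose stock default matches the write_buffer_size: 33554432 and NoCompression values in this dump. A sketch for reading that option back from a live cluster, assuming an admin keyring; the helper name is illustrative:

    import subprocess

    def mon_rocksdb_options(who="mon.a"):
        # `ceph config get <who> <option>` reports the value the daemon
        # would apply; here that is the option string RocksDB expands
        # into the Options.* lines logged around this point.
        out = subprocess.run(
            ["ceph", "config", "get", who, "mon_rocksdb_options"],
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()
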
2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5556e8a965a0) 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: cache_index_and_filter_blocks: 1 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: pin_top_level_index_and_filter: 1 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: index_type: 0 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: data_block_index_type: 0 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: index_shortening: 1 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: checksum: 4 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: no_block_cache: 0 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: block_cache: 0x5556e8abb350 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: block_cache_name: BinnedLRUCache 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: block_cache_options: 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: capacity : 536870912 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: num_shard_bits : 4 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: strict_capacity_limit : 0 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: high_pri_pool_ratio: 0.000 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: block_cache_compressed: (nil) 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: persistent_cache: (nil) 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: block_size: 4096 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: block_size_deviation: 10 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: block_restart_interval: 16 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: index_block_restart_interval: 1 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: metadata_block_size: 4096 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: partition_filters: 0 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: use_delta_encoding: 1 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: filter_policy: bloomfilter 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: whole_key_filtering: 1 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: verify_compression: 0 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: read_amp_bytes_per_bit: 0 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: format_version: 5 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: enable_index_compression: 1 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: block_align: 0 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: max_auto_readahead_size: 262144 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: prepopulate_block_cache: 0 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: initial_auto_readahead_size: 8192 2026-03-10T13:35:25.587 INFO:journalctl@ceph.mon.a.vm05.stdout: 
num_file_reads_for_auto_readahead: 2 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.write_buffer_size: 33554432 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_write_buffer_number: 2 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compression: NoCompression 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.bottommost_compression: Disabled 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.prefix_extractor: nullptr 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.num_levels: 7 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compression_opts.level: 32767 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compression_opts.strategy: 0 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 
ceph-mon[51512]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compression_opts.enabled: false 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.target_file_size_base: 67108864 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T13:35:25.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T13:35:25.589 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.arena_block_size: 1048576 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.disable_auto_compactions: 0 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.inplace_update_support: 0 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 
ceph-mon[51512]: rocksdb: Options.bloom_locality: 0 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.max_successive_merges: 0 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.paranoid_file_checks: 0 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.force_consistency_checks: 1 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.ttl: 2592000 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.enable_blob_files: false 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.min_blob_size: 0 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.blob_file_size: 268435456 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T13:35:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.blob_file_starting_level: 0 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: 
[db/db_impl/db_impl_open.cc:539] DB ID: a9eb0dcd-45c2-4878-a069-b004c28b19a6 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773149725555068, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773149725556648, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 84702, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 258, "table_properties": {"data_size": 82853, "index_size": 238, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 581, "raw_key_size": 10837, "raw_average_key_size": 48, "raw_value_size": 76770, "raw_average_value_size": 341, "num_data_blocks": 10, "num_entries": 225, "num_filter_entries": 225, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773149725, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9eb0dcd-45c2-4878-a069-b004c28b19a6", "db_session_id": "EOJMZ4AMPG5QQJWSG2TH", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773149725556712, "job": 1, "event": "recovery_finished"} 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5556e8abce00 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: DB pointer 0x5556e8bd2000 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: ** DB Stats ** 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T13:35:25.590 
INFO:journalctl@ceph.mon.a.vm05.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: ** Compaction Stats [default] ** 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: L0 2/0 84.54 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 61.9 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: Sum 2/0 84.54 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 61.9 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 61.9 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: ** Compaction Stats [default] ** 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 61.9 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T13:35:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: 
AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: Cumulative compaction: 0.00 GB write, 7.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: Interval compaction: 0.00 GB write, 7.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: Block cache BinnedLRUCache@0x5556e8abb350#7 capacity: 512.00 MB usage: 26.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.2e-05 secs_since: 0 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: Block cache entry stats(count,size,portion): DataBlock(3,25.48 KB,0.00486076%) FilterBlock(2,0.77 KB,0.000146031%) IndexBlock(2,0.42 KB,8.04663e-05%) Misc(1,0.00 KB,0%) 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: starting mon.a rank 0 at public addrs v1:192.168.123.105:6789/0 at bind addrs v1:192.168.123.105:6789/0 mon_data /var/lib/ceph/mon/ceph-a fsid e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: mon.a@-1(???) 
e1 preinit fsid e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: mon.a@-1(???).mds e1 new map 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: mon.a@-1(???).mds e1 print_map 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: e1 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: btime 2026-03-10T13:35:24:005627+0000 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: legacy client fscid: -1 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout: No filesystems configured 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: mon.a@-1(???).mgr e0 loading version 1 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: mon.a@-1(???).mgr e1 active server: (0) 2026-03-10T13:35:25.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: mon.a@-1(???).mgr e1 mkfs or daemon transitioned to available, loading commands 2026-03-10T13:35:25.857 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:35:25.857 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: monmap epoch 1 2026-03-10T13:35:25.857 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: fsid e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:35:25.857 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: last_changed 2026-03-10T13:35:21.154333+0000 2026-03-10T13:35:25.857 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: 
created 2026-03-10T13:35:21.154333+0000 2026-03-10T13:35:25.857 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: min_mon_release 19 (squid) 2026-03-10T13:35:25.857 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: election_strategy: 1 2026-03-10T13:35:25.857 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: 0: v1:192.168.123.105:6789/0 mon.a 2026-03-10T13:35:25.857 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: fsmap 2026-03-10T13:35:25.857 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T13:35:25.857 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:25 vm05 ceph-mon[51512]: mgrmap e1: no daemons active 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.690+0000 7f9051d00640 1 Processor -- start 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.692+0000 7f9051d00640 1 -- start start 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.692+0000 7f9051d00640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f904c07a780 con 0x7f904c104f00 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.692+0000 7f904b7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f904c104f00 0x7f904c107310 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:48364/0 (socket says 192.168.123.105:48364) 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.692+0000 7f904b7fe640 1 -- 192.168.123.105:0/1963919543 learned_addr learned my addr 192.168.123.105:0/1963919543 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.693+0000 7f904a7fc640 1 -- 192.168.123.105:0/1963919543 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3160699910 0 0) 0x7f904c07a780 con 0x7f904c104f00 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.693+0000 7f904a7fc640 1 -- 192.168.123.105:0/1963919543 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9034003620 con 0x7f904c104f00 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.693+0000 7f904a7fc640 1 -- 192.168.123.105:0/1963919543 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 2342393448 0 0) 0x7f9034003620 con 0x7f904c104f00 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.693+0000 7f904a7fc640 1 -- 192.168.123.105:0/1963919543 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f904c109a40 con 0x7f904c104f00 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.693+0000 7f904a7fc640 1 -- 192.168.123.105:0/1963919543 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f9030002e10 con 0x7f904c104f00 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-10T13:35:25.693+0000 7f904a7fc640 1 -- 192.168.123.105:0/1963919543 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f90300033e0 con 0x7f904c104f00 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.693+0000 7f904a7fc640 1 -- 192.168.123.105:0/1963919543 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f9030005780 con 0x7f904c104f00 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.694+0000 7f9051d00640 1 -- 192.168.123.105:0/1963919543 >> v1:192.168.123.105:6789/0 conn(0x7f904c104f00 legacy=0x7f904c107310 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.694+0000 7f9051d00640 1 -- 192.168.123.105:0/1963919543 shutdown_connections 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.694+0000 7f9051d00640 1 -- 192.168.123.105:0/1963919543 >> 192.168.123.105:0/1963919543 conn(0x7f904c100d50 msgr2=0x7f904c103170 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.694+0000 7f9051d00640 1 -- 192.168.123.105:0/1963919543 shutdown_connections 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.694+0000 7f9051d00640 1 -- 192.168.123.105:0/1963919543 wait complete. 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.695+0000 7f9051d00640 1 Processor -- start 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.695+0000 7f9051d00640 1 -- start start 2026-03-10T13:35:25.869 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.695+0000 7f9051d00640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f904c1a3070 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.695+0000 7f904b7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f904c104f00 0x7f904c1a2960 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:48372/0 (socket says 192.168.123.105:48372) 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.695+0000 7f904b7fe640 1 -- 192.168.123.105:0/3692721789 learned_addr learned my addr 192.168.123.105:0/3692721789 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.696+0000 7f9048ff9640 1 -- 192.168.123.105:0/3692721789 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2233509822 0 0) 0x7f904c1a3070 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.696+0000 7f9048ff9640 1 -- 192.168.123.105:0/3692721789 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9024003620 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.696+0000 7f9048ff9640 1 -- 
192.168.123.105:0/3692721789 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3787633718 0 0) 0x7f9024003620 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.696+0000 7f9048ff9640 1 -- 192.168.123.105:0/3692721789 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f904c1a3070 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.696+0000 7f9048ff9640 1 -- 192.168.123.105:0/3692721789 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f9030002890 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.696+0000 7f9048ff9640 1 -- 192.168.123.105:0/3692721789 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1487299584 0 0) 0x7f904c1a3070 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.696+0000 7f9048ff9640 1 -- 192.168.123.105:0/3692721789 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f904c1a3240 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.697+0000 7f9051d00640 1 -- 192.168.123.105:0/3692721789 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f904c1a3550 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.697+0000 7f9051d00640 1 -- 192.168.123.105:0/3692721789 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f904c1a70e0 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.697+0000 7f9048ff9640 1 -- 192.168.123.105:0/3692721789 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f9030004bd0 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.697+0000 7f9048ff9640 1 -- 192.168.123.105:0/3692721789 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f9030005f10 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.697+0000 7f9048ff9640 1 -- 192.168.123.105:0/3692721789 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 1) ==== 811+0+0 (unknown 4133961934 0 0) 0x7f9030007570 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.697+0000 7f9048ff9640 1 -- 192.168.123.105:0/3692721789 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 4001592299 0 0) 0x7f9030006af0 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.698+0000 7f9051d00640 1 -- 192.168.123.105:0/3692721789 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f904c1095d0 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.699+0000 7f9048ff9640 1 -- 192.168.123.105:0/3692721789 <== mon.0 v1:192.168.123.105:6789/0 9 ==== 
mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (unknown 1092875540 0 4127419540) 0x7f90300060d0 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.731+0000 7f9051d00640 1 -- 192.168.123.105:0/3692721789 --> v1:192.168.123.105:6789/0 -- mon_command([{prefix=config set, name=public_network}] v 0) -- 0x7f904c19be80 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.734+0000 7f9048ff9640 1 -- 192.168.123.105:0/3692721789 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{prefix=config set, name=public_network}]=0 v3)=0 v3) ==== 127+0+0 (unknown 808082368 0 0) 0x7f9030006d40 con 0x7f904c104f00 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.735+0000 7f9051d00640 1 -- 192.168.123.105:0/3692721789 >> v1:192.168.123.105:6789/0 conn(0x7f904c104f00 legacy=0x7f904c1a2960 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.735+0000 7f9051d00640 1 -- 192.168.123.105:0/3692721789 shutdown_connections 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.735+0000 7f9051d00640 1 -- 192.168.123.105:0/3692721789 >> 192.168.123.105:0/3692721789 conn(0x7f904c100d50 msgr2=0x7f904c102c40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.735+0000 7f9051d00640 1 -- 192.168.123.105:0/3692721789 shutdown_connections 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:25.735+0000 7f9051d00640 1 -- 192.168.123.105:0/3692721789 wait complete. 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:35:25.870 INFO:teuthology.orchestra.run.vm05.stdout:Creating mgr... 2026-03-10T13:35:25.871 INFO:teuthology.orchestra.run.vm05.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T13:35:25.871 INFO:teuthology.orchestra.run.vm05.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-10T13:35:26.024 INFO:teuthology.orchestra.run.vm05.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mgr.y 2026-03-10T13:35:26.024 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Failed to reset failed state of unit ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mgr.y.service: Unit ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mgr.y.service not loaded. 2026-03-10T13:35:26.153 INFO:teuthology.orchestra.run.vm05.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-e063dc72-1c85-11f1-a098-09993c5c5b66.target.wants/ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mgr.y.service → /etc/systemd/system/ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@.service. 2026-03-10T13:35:26.171 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:26 vm05 systemd[1]: Starting Ceph mgr.y for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T13:35:26.340 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present 2026-03-10T13:35:26.340 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to enable service . 
firewalld.service is not available 2026-03-10T13:35:26.340 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present 2026-03-10T13:35:26.340 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-10T13:35:26.340 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr to start... 2026-03-10T13:35:26.340 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr... 2026-03-10T13:35:26.425 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:26 vm05 podman[51734]: 2026-03-10 13:35:26.27082712 +0000 UTC m=+0.016999353 container create 7467828a73d7bb28ed474d6bf6e4eaeb531688302e4dda0b176565da140a28b7 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T13:35:26.425 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:26 vm05 podman[51734]: 2026-03-10 13:35:26.321263735 +0000 UTC m=+0.067435977 container init 7467828a73d7bb28ed474d6bf6e4eaeb531688302e4dda0b176565da140a28b7 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3) 2026-03-10T13:35:26.426 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:26 vm05 podman[51734]: 2026-03-10 13:35:26.328734064 +0000 UTC m=+0.074906295 container start 7467828a73d7bb28ed474d6bf6e4eaeb531688302e4dda0b176565da140a28b7 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2) 2026-03-10T13:35:26.426 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:26 vm05 bash[51734]: 
7467828a73d7bb28ed474d6bf6e4eaeb531688302e4dda0b176565da140a28b7 2026-03-10T13:35:26.426 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:26 vm05 podman[51734]: 2026-03-10 13:35:26.263434408 +0000 UTC m=+0.009606660 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:35:26.426 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:26 vm05 systemd[1]: Started Ceph mgr.y for e063dc72-1c85-11f1-a098-09993c5c5b66. 2026-03-10T13:35:26.690 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:26 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:26.442+0000 7f4edab0c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T13:35:26.690 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:26 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:26.486+0000 7f4edab0c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T13:35:26.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-10T13:35:26.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:35:26.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsid": "e063dc72-1c85-11f1-a098-09993c5c5b66", 2026-03-10T13:35:26.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T13:35:26.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T13:35:26.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 0 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T13:35:26.704 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T13:35:26.704 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T13:35:24:005627+0000", 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T13:35:24.006722+0000", 2026-03-10T13:35:26.705 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.486+0000 7f882ec3d640 1 Processor -- start 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.486+0000 7f882ec3d640 1 -- start start 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.486+0000 7f882ec3d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8828074770 con 0x7f8828073bd0 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.487+0000 7f882dc3b640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f8828073bd0 0x7f8828073fd0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:48404/0 (socket says 192.168.123.105:48404) 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.487+0000 7f882dc3b640 1 -- 192.168.123.105:0/1366953087 learned_addr learned my addr 192.168.123.105:0/1366953087 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:26.705 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.489+0000 7f882cc39640 1 -- 192.168.123.105:0/1366953087 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1432014335 0 0) 0x7f8828074770 con 0x7f8828073bd0 2026-03-10T13:35:26.706 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.489+0000 7f882cc39640 1 -- 192.168.123.105:0/1366953087 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8810003620 con 0x7f8828073bd0 2026-03-10T13:35:26.706 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.489+0000 7f882cc39640 1 -- 192.168.123.105:0/1366953087 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 679906736 0 0) 0x7f8810003620 con 0x7f8828073bd0 2026-03-10T13:35:26.706 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.489+0000 7f882cc39640 1 -- 192.168.123.105:0/1366953087 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f882807d060 con 0x7f8828073bd0 2026-03-10T13:35:26.706 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.489+0000 7f882cc39640 1 -- 192.168.123.105:0/1366953087 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f8818002a70 con 0x7f8828073bd0 2026-03-10T13:35:26.706 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.489+0000 7f882cc39640 1 -- 192.168.123.105:0/1366953087 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f8818003100 con 0x7f8828073bd0 2026-03-10T13:35:26.706 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.490+0000 7f882ec3d640 1 -- 192.168.123.105:0/1366953087 >> v1:192.168.123.105:6789/0 conn(0x7f8828073bd0 legacy=0x7f8828073fd0 
unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:26.706 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.491+0000 7f882ec3d640 1 -- 192.168.123.105:0/1366953087 shutdown_connections 2026-03-10T13:35:26.706 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.491+0000 7f882ec3d640 1 -- 192.168.123.105:0/1366953087 >> 192.168.123.105:0/1366953087 conn(0x7f882806f4e0 msgr2=0x7f8828071920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:26.706 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.491+0000 7f882ec3d640 1 -- 192.168.123.105:0/1366953087 shutdown_connections 2026-03-10T13:35:26.706 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.491+0000 7f882ec3d640 1 -- 192.168.123.105:0/1366953087 wait complete. 2026-03-10T13:35:26.706 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.491+0000 7f882ec3d640 1 Processor -- start 2026-03-10T13:35:26.706 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.491+0000 7f882ec3d640 1 -- start start 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.491+0000 7f882ec3d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f882807e5c0 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.492+0000 7f882dc3b640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f8828073bd0 0x7f882807deb0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:48414/0 (socket says 192.168.123.105:48414) 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.492+0000 7f882dc3b640 1 -- 192.168.123.105:0/3041849910 learned_addr learned my addr 192.168.123.105:0/3041849910 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.492+0000 7f8816ffd640 1 -- 192.168.123.105:0/3041849910 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1548979680 0 0) 0x7f882807e5c0 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.492+0000 7f8816ffd640 1 -- 192.168.123.105:0/3041849910 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f87f8003620 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.492+0000 7f8816ffd640 1 -- 192.168.123.105:0/3041849910 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 709066060 0 0) 0x7f87f8003620 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.492+0000 7f8816ffd640 1 -- 192.168.123.105:0/3041849910 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f882807e5c0 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.492+0000 7f8816ffd640 1 -- 192.168.123.105:0/3041849910 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f8818004a20 con 0x7f8828073bd0 2026-03-10T13:35:26.707 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.492+0000 7f8816ffd640 1 -- 192.168.123.105:0/3041849910 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3404315055 0 0) 0x7f882807e5c0 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.492+0000 7f8816ffd640 1 -- 192.168.123.105:0/3041849910 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f882807e790 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.493+0000 7f882ec3d640 1 -- 192.168.123.105:0/3041849910 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f882807ea40 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.493+0000 7f882ec3d640 1 -- 192.168.123.105:0/3041849910 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f8828082630 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.493+0000 7f8816ffd640 1 -- 192.168.123.105:0/3041849910 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f8818003000 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.493+0000 7f8816ffd640 1 -- 192.168.123.105:0/3041849910 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f88180057a0 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.493+0000 7f882ec3d640 1 -- 192.168.123.105:0/3041849910 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f882807cc50 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.494+0000 7f8816ffd640 1 -- 192.168.123.105:0/3041849910 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 1) ==== 811+0+0 (unknown 4133961934 0 0) 0x7f8818006340 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.494+0000 7f8816ffd640 1 -- 192.168.123.105:0/3041849910 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 4001592299 0 0) 0x7f8818007790 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.495+0000 7f8816ffd640 1 -- 192.168.123.105:0/3041849910 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (unknown 1092875540 0 4127419540) 0x7f8818005f90 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.526+0000 7f882ec3d640 1 -- 192.168.123.105:0/3041849910 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7f8828082920 con 0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.527+0000 7f8816ffd640 1 -- 192.168.123.105:0/3041849910 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1291 (unknown 4201413639 0 2875819377) 0x7f8818007a30 con 
0x7f8828073bd0 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.530+0000 7f8814ff9640 1 -- 192.168.123.105:0/3041849910 >> v1:192.168.123.105:6789/0 conn(0x7f8828073bd0 legacy=0x7f882807deb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.530+0000 7f8814ff9640 1 -- 192.168.123.105:0/3041849910 shutdown_connections 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.530+0000 7f8814ff9640 1 -- 192.168.123.105:0/3041849910 >> 192.168.123.105:0/3041849910 conn(0x7f882806f4e0 msgr2=0x7f8828079610 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.530+0000 7f8814ff9640 1 -- 192.168.123.105:0/3041849910 shutdown_connections 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:26.530+0000 7f8814ff9640 1 -- 192.168.123.105:0/3041849910 wait complete. 2026-03-10T13:35:26.707 INFO:teuthology.orchestra.run.vm05.stdout:mgr not available, waiting (1/15)... 2026-03-10T13:35:27.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3692721789' entity='client.admin' 2026-03-10T13:35:27.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3041849910' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:35:27.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:26 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:26.906+0000 7f4edab0c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T13:35:27.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:27 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:27.229+0000 7f4edab0c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T13:35:27.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:27 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T13:35:27.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:27 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
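The stream of "Module X has missing NOTIFY_TYPES member" messages above comes from the mgr loading its Python modules: newer Ceph releases expect each MgrModule subclass to declare which cluster-map notifications it consumes in a NOTIFY_TYPES class attribute, and the loader logs this (harmless) warning for every module that omits it. A minimal sketch of the expected shape, assuming the in-tree mgr_module API; the module name Demo is made up:

# Minimal sketch of a mgr module declaring NOTIFY_TYPES ("Demo" is hypothetical).
from mgr_module import MgrModule, NotifyType

class Demo(MgrModule):
    # Declares which notifications the mgr should deliver to notify();
    # modules lacking this attribute trigger the warning seen in the log above.
    NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

    def notify(self, notify_type: NotifyType, notify_id: str) -> None:
        # Called by the mgr for each subscribed map change.
        self.log.debug("got %s (id=%s)", notify_type, notify_id)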
2026-03-10T13:35:27.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:27 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: from numpy import show_config as show_numpy_config 2026-03-10T13:35:27.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:27 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:27.314+0000 7f4edab0c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T13:35:27.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:27 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:27.351+0000 7f4edab0c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T13:35:27.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:27 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:27.419+0000 7f4edab0c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T13:35:28.173 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:27 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:27.911+0000 7f4edab0c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T13:35:28.173 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:28 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:28.021+0000 7f4edab0c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:35:28.173 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:28 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:28.061+0000 7f4edab0c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T13:35:28.173 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:28 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:28.095+0000 7f4edab0c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T13:35:28.173 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:28 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:28.135+0000 7f4edab0c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T13:35:28.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:28 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:28.172+0000 7f4edab0c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T13:35:28.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:28 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:28.335+0000 7f4edab0c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T13:35:28.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:28 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:28.382+0000 7f4edab0c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T13:35:28.855 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:28 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:28.587+0000 7f4edab0c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T13:35:29.037 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-10T13:35:29.037 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:35:29.037 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsid": "e063dc72-1c85-11f1-a098-09993c5c5b66", 2026-03-10T13:35:29.037 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T13:35:29.037 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-10T13:35:29.037 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-10T13:35:29.037 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 0
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ],
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "a"
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ],
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_age": 3,
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T13:35:24:005627+0000", 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T13:35:24.006722+0000", 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.854+0000 7f6f8b577640 1 Processor -- start 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.854+0000 7f6f8b577640 1 -- start start 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.854+0000 7f6f8b577640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6f840a8ce0 con 0x7f6f840a48b0 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.855+0000 7f6f8a575640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f6f840a48b0 0x7f6f840a4cb0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:55996/0 (socket says 192.168.123.105:55996) 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.855+0000 7f6f8a575640 1 -- 192.168.123.105:0/1422767589 learned_addr 
learned my addr 192.168.123.105:0/1422767589 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.855+0000 7f6f89573640 1 -- 192.168.123.105:0/1422767589 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1412283720 0 0) 0x7f6f840a8ce0 con 0x7f6f840a48b0 2026-03-10T13:35:29.038 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.855+0000 7f6f89573640 1 -- 192.168.123.105:0/1422767589 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6f6c003620 con 0x7f6f840a48b0 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.856+0000 7f6f89573640 1 -- 192.168.123.105:0/1422767589 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 3194534983 0 0) 0x7f6f6c003620 con 0x7f6f840a48b0 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.856+0000 7f6f89573640 1 -- 192.168.123.105:0/1422767589 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6f840a9ec0 con 0x7f6f840a48b0 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.856+0000 7f6f89573640 1 -- 192.168.123.105:0/1422767589 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f6f7c002e10 con 0x7f6f840a48b0 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.856+0000 7f6f89573640 1 -- 192.168.123.105:0/1422767589 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f6f7c0033e0 con 0x7f6f840a48b0 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.857+0000 7f6f8b577640 1 -- 192.168.123.105:0/1422767589 >> v1:192.168.123.105:6789/0 conn(0x7f6f840a48b0 legacy=0x7f6f840a4cb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.858+0000 7f6f8b577640 1 -- 192.168.123.105:0/1422767589 shutdown_connections 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.858+0000 7f6f8b577640 1 -- 192.168.123.105:0/1422767589 >> 192.168.123.105:0/1422767589 conn(0x7f6f8409fbc0 msgr2=0x7f6f840a2020 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.858+0000 7f6f8b577640 1 -- 192.168.123.105:0/1422767589 shutdown_connections 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.858+0000 7f6f8b577640 1 -- 192.168.123.105:0/1422767589 wait complete. 
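Each /usr/bin/ceph invocation above leaves the same stderr trace: messenger start, v1 banner/identify exchange, auth, mon_subscribe for config and monmap, one mon_command, then mark_down / shutdown_connections / wait complete. The same round trip can be driven from Python with the rados bindings; a rough sketch, with the conffile and keyring paths assumed for illustration:

# Rough Python equivalent of one /usr/bin/ceph invocation, via python-rados.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                      conf={'keyring': '/etc/ceph/ceph.client.admin.keyring'})
cluster.connect()  # messenger start + auth + mon_subscribe, as in the stderr above
try:
    cmd = json.dumps({"prefix": "status", "format": "json"})
    ret, outbuf, errs = cluster.mon_command(cmd, b'')  # the mon_command(...) seen in the log
    status = json.loads(outbuf)
    print(status["health"]["status"])
finally:
    cluster.shutdown()  # mark_down / shutdown_connections / wait complete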
2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.858+0000 7f6f8b577640 1 Processor -- start 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.858+0000 7f6f8b577640 1 -- start start 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.858+0000 7f6f8b577640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6f8413a7d0 con 0x7f6f840a48b0 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.858+0000 7f6f8a575640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f6f840a48b0 0x7f6f8413a0c0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:56000/0 (socket says 192.168.123.105:56000) 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.858+0000 7f6f8a575640 1 -- 192.168.123.105:0/1491529870 learned_addr learned my addr 192.168.123.105:0/1491529870 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.859+0000 7f6f7b7fe640 1 -- 192.168.123.105:0/1491529870 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1874504203 0 0) 0x7f6f8413a7d0 con 0x7f6f840a48b0 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.859+0000 7f6f7b7fe640 1 -- 192.168.123.105:0/1491529870 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6f64003620 con 0x7f6f840a48b0 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.859+0000 7f6f7b7fe640 1 -- 192.168.123.105:0/1491529870 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3129754020 0 0) 0x7f6f64003620 con 0x7f6f840a48b0 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.859+0000 7f6f7b7fe640 1 -- 192.168.123.105:0/1491529870 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6f8413a7d0 con 0x7f6f840a48b0 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.859+0000 7f6f7b7fe640 1 -- 192.168.123.105:0/1491529870 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f6f7c003170 con 0x7f6f840a48b0 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.859+0000 7f6f7b7fe640 1 -- 192.168.123.105:0/1491529870 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2694883761 0 0) 0x7f6f8413a7d0 con 0x7f6f840a48b0 2026-03-10T13:35:29.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.860+0000 7f6f7b7fe640 1 -- 192.168.123.105:0/1491529870 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6f8413a9a0 con 0x7f6f840a48b0 2026-03-10T13:35:29.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.860+0000 7f6f8b577640 1 -- 192.168.123.105:0/1491529870 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f6f8413acb0 con 0x7f6f840a48b0 2026-03-10T13:35:29.040 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.860+0000 7f6f8b577640 1 -- 192.168.123.105:0/1491529870 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f6f8413e840 con 0x7f6f840a48b0 2026-03-10T13:35:29.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.860+0000 7f6f8b577640 1 -- 192.168.123.105:0/1491529870 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6f8413eb80 con 0x7f6f840a48b0 2026-03-10T13:35:29.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.861+0000 7f6f7b7fe640 1 -- 192.168.123.105:0/1491529870 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f6f7c0029a0 con 0x7f6f840a48b0 2026-03-10T13:35:29.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.861+0000 7f6f7b7fe640 1 -- 192.168.123.105:0/1491529870 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f6f7c0058d0 con 0x7f6f840a48b0 2026-03-10T13:35:29.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.861+0000 7f6f7b7fe640 1 -- 192.168.123.105:0/1491529870 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 1) ==== 811+0+0 (unknown 4133961934 0 0) 0x7f6f7c006690 con 0x7f6f840a48b0 2026-03-10T13:35:29.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.861+0000 7f6f7b7fe640 1 -- 192.168.123.105:0/1491529870 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 4001592299 0 0) 0x7f6f7c007a50 con 0x7f6f840a48b0 2026-03-10T13:35:29.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.862+0000 7f6f7b7fe640 1 -- 192.168.123.105:0/1491529870 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (unknown 1092875540 0 4127419540) 0x7f6f7c003410 con 0x7f6f840a48b0 2026-03-10T13:35:29.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.895+0000 7f6f8b577640 1 -- 192.168.123.105:0/1491529870 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7f6f840a9510 con 0x7f6f840a48b0 2026-03-10T13:35:29.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.895+0000 7f6f7b7fe640 1 -- 192.168.123.105:0/1491529870 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1291 (unknown 4201413639 0 2899719528) 0x7f6f7c005a90 con 0x7f6f840a48b0 2026-03-10T13:35:29.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.897+0000 7f6f797fa640 1 -- 192.168.123.105:0/1491529870 >> v1:192.168.123.105:6789/0 conn(0x7f6f840a48b0 legacy=0x7f6f8413a0c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:29.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.898+0000 7f6f797fa640 1 -- 192.168.123.105:0/1491529870 shutdown_connections 2026-03-10T13:35:29.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.898+0000 7f6f797fa640 1 -- 192.168.123.105:0/1491529870 >> 192.168.123.105:0/1491529870 conn(0x7f6f8409fbc0 msgr2=0x7f6f840a2000 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:29.040 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.898+0000 7f6f797fa640 1 -- 192.168.123.105:0/1491529870 shutdown_connections 2026-03-10T13:35:29.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:28.898+0000 7f6f797fa640 1 -- 192.168.123.105:0/1491529870 wait complete. 2026-03-10T13:35:29.040 INFO:teuthology.orchestra.run.vm05.stdout:mgr not available, waiting (2/15)... 2026-03-10T13:35:29.147 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1491529870' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:35:29.147 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:28 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:28.867+0000 7f4edab0c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T13:35:29.147 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:28 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:28.904+0000 7f4edab0c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T13:35:29.147 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:28 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:28.948+0000 7f4edab0c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T13:35:29.147 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:29.025+0000 7f4edab0c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T13:35:29.147 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:29.064+0000 7f4edab0c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T13:35:29.427 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:29.142+0000 7f4edab0c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T13:35:29.427 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:29.254+0000 7f4edab0c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:35:29.427 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:29.389+0000 7f4edab0c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T13:35:29.427 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:29.426+0000 7f4edab0c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T13:35:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:29 vm05 ceph-mon[51512]: Activating manager daemon y 2026-03-10T13:35:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:29 vm05 ceph-mon[51512]: mgrmap e2: y(active, starting, since 0.00378319s) 2026-03-10T13:35:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:29 vm05 ceph-mon[51512]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:35:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:29 vm05 ceph-mon[51512]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 
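The "mgr not available, waiting (n/15)..." lines are a bootstrap readiness poll: the status command is re-run until mgrmap.available flips to true, giving up after 15 attempts. A sketch of that kind of loop (wait_for_mgr is a hypothetical name; cephadm's own helper differs in detail):

# Sketch of the readiness poll behind "mgr not available, waiting (n/15)...".
import json
import subprocess
import time

def wait_for_mgr(attempts: int = 15, delay: float = 1.0) -> dict:
    for i in range(1, attempts + 1):
        out = subprocess.run(['ceph', 'status', '--format', 'json-pretty'],
                             capture_output=True, text=True, check=True).stdout
        status = json.loads(out)
        if status['mgrmap'].get('available'):
            return status  # "mgr is available"
        print(f'mgr not available, waiting ({i}/{attempts})...')
        time.sleep(delay)
    raise RuntimeError('mgr never became available')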
2026-03-10T13:35:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:29 vm05 ceph-mon[51512]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:35:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:29 vm05 ceph-mon[51512]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:35:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:29 vm05 ceph-mon[51512]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T13:35:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:29 vm05 ceph-mon[51512]: Manager daemon y is now available 2026-03-10T13:35:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:29 vm05 ceph-mon[51512]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:35:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:29 vm05 ceph-mon[51512]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' 2026-03-10T13:35:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:29 vm05 ceph-mon[51512]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T13:35:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:29 vm05 ceph-mon[51512]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' 2026-03-10T13:35:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:29 vm05 ceph-mon[51512]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' 2026-03-10T13:35:31.450 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 2026-03-10T13:35:31.450 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:35:31.450 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsid": "e063dc72-1c85-11f1-a098-09993c5c5b66", 2026-03-10T13:35:31.450 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T13:35:31.450 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T13:35:31.450 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T13:35:31.450 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T13:35:31.450 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:35:31.450 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 0 2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T13:35:31.451 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T13:35:24:005627+0000",
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout },
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T13:35:31.451
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:35:31.451 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T13:35:24.006722+0000", 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.174+0000 7fb2bffff640 1 Processor -- start 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.175+0000 7fb2bffff640 1 -- start start 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.175+0000 7fb2bffff640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fb2c010a960 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.176+0000 7fb2beffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fb2c0106530 0x7fb2c0106930 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:56092/0 (socket says 192.168.123.105:56092) 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.176+0000 7fb2beffd640 1 -- 192.168.123.105:0/2061233384 learned_addr learned my addr 192.168.123.105:0/2061233384 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.176+0000 7fb2bdffb640 1 -- 192.168.123.105:0/2061233384 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3507852058 0 0) 0x7fb2c010a960 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.176+0000 7fb2bdffb640 1 -- 192.168.123.105:0/2061233384 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fb2ac003620 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.177+0000 7fb2bdffb640 1 -- 192.168.123.105:0/2061233384 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 299892081 0 0) 0x7fb2ac003620 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.177+0000 7fb2bdffb640 1 -- 192.168.123.105:0/2061233384 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb2c010bb40 con 0x7fb2c0106530 
2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.177+0000 7fb2bdffb640 1 -- 192.168.123.105:0/2061233384 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fb2a8002e10 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.177+0000 7fb2bdffb640 1 -- 192.168.123.105:0/2061233384 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fb2a80033e0 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.177+0000 7fb2bdffb640 1 -- 192.168.123.105:0/2061233384 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fb2a8005780 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.178+0000 7fb2bffff640 1 -- 192.168.123.105:0/2061233384 >> v1:192.168.123.105:6789/0 conn(0x7fb2c0106530 legacy=0x7fb2c0106930 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.178+0000 7fb2bffff640 1 -- 192.168.123.105:0/2061233384 shutdown_connections 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.178+0000 7fb2bffff640 1 -- 192.168.123.105:0/2061233384 >> 192.168.123.105:0/2061233384 conn(0x7fb2c0101d00 msgr2=0x7fb2c0104120 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.178+0000 7fb2bffff640 1 -- 192.168.123.105:0/2061233384 shutdown_connections 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.178+0000 7fb2bffff640 1 -- 192.168.123.105:0/2061233384 wait complete. 
2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.179+0000 7fb2bffff640 1 Processor -- start 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.179+0000 7fb2bffff640 1 -- start start 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.179+0000 7fb2bffff640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fb2c019a4f0 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.179+0000 7fb2beffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fb2c0106530 0x7fb2c0199de0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:56098/0 (socket says 192.168.123.105:56098) 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.179+0000 7fb2beffd640 1 -- 192.168.123.105:0/4181669056 learned_addr learned my addr 192.168.123.105:0/4181669056 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.179+0000 7fb29ffff640 1 -- 192.168.123.105:0/4181669056 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3467048924 0 0) 0x7fb2c019a4f0 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.179+0000 7fb29ffff640 1 -- 192.168.123.105:0/4181669056 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fb294003620 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.180+0000 7fb29ffff640 1 -- 192.168.123.105:0/4181669056 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2780839855 0 0) 0x7fb294003620 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.180+0000 7fb29ffff640 1 -- 192.168.123.105:0/4181669056 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fb2c019a4f0 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.180+0000 7fb29ffff640 1 -- 192.168.123.105:0/4181669056 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fb2a8002890 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.180+0000 7fb29ffff640 1 -- 192.168.123.105:0/4181669056 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 999146289 0 0) 0x7fb2c019a4f0 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.180+0000 7fb29ffff640 1 -- 192.168.123.105:0/4181669056 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb2c019a6c0 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.180+0000 7fb2bffff640 1 -- 192.168.123.105:0/4181669056 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fb2c019a9d0 con 0x7fb2c0106530 2026-03-10T13:35:31.452 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.180+0000 7fb29ffff640 1 -- 192.168.123.105:0/4181669056 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fb2a8004bd0 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.180+0000 7fb29ffff640 1 -- 192.168.123.105:0/4181669056 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fb2a80061a0 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.181+0000 7fb2bffff640 1 -- 192.168.123.105:0/4181669056 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fb2c019e560 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.181+0000 7fb29ffff640 1 -- 192.168.123.105:0/4181669056 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 3) ==== 50095+0+0 (unknown 76144879 0 0) 0x7fb2a8012720 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.181+0000 7fb29ffff640 1 -- 192.168.123.105:0/4181669056 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 4001592299 0 0) 0x7fb2a804d950 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.182+0000 7fb2bffff640 1 -- 192.168.123.105:0/4181669056 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fb2c010b5b0 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.185+0000 7fb29ffff640 1 -- 192.168.123.105:0/4181669056 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fb2a8018af0 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.311+0000 7fb2bffff640 1 -- 192.168.123.105:0/4181669056 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7fb2c019ec30 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.312+0000 7fb29ffff640 1 -- 192.168.123.105:0/4181669056 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1290 (unknown 4201413639 0 1404995384) 0x7fb2a80183f0 con 0x7fb2c0106530 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.315+0000 7fb2bffff640 1 -- 192.168.123.105:0/4181669056 >> v1:192.168.123.105:6800/1920070151 conn(0x7fb29403e980 legacy=0x7fb294040e40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.315+0000 7fb2bffff640 1 -- 192.168.123.105:0/4181669056 >> v1:192.168.123.105:6789/0 conn(0x7fb2c0106530 legacy=0x7fb2c0199de0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.315+0000 7fb2bffff640 1 -- 192.168.123.105:0/4181669056 shutdown_connections 2026-03-10T13:35:31.452 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.315+0000 7fb2bffff640 1 -- 192.168.123.105:0/4181669056 >> 192.168.123.105:0/4181669056 conn(0x7fb2c0101d00 msgr2=0x7fb2c01040f0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.316+0000 7fb2bffff640 1 -- 192.168.123.105:0/4181669056 shutdown_connections
2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.316+0000 7fb2bffff640 1 -- 192.168.123.105:0/4181669056 wait complete.
2026-03-10T13:35:31.452 INFO:teuthology.orchestra.run.vm05.stdout:mgr is available
2026-03-10T13:35:31.567 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:31 vm05 ceph-mon[51512]: mgrmap e3: y(active, since 1.00882s)
2026-03-10T13:35:31.567 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4181669056' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T13:35:31.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-10T13:35:31.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T13:35:31.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout fsid = e063dc72-1c85-11f1-a098-09993c5c5b66
2026-03-10T13:35:31.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T13:35:31.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_host = [v1:192.168.123.105:6789]
2026-03-10T13:35:31.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T13:35:31.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T13:35:31.844 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout
2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.585+0000 7f0b6a22d640 1 Processor -- start
2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.586+0000 7f0b6a22d640 1 -- start start
2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.586+0000 7f0b6a22d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f0b6410cd80 con 0x7f0b64108950
2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.586+0000 7f0b637fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f0b64108950
0x7f0b64108d50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:56100/0 (socket says 192.168.123.105:56100) 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.586+0000 7f0b637fe640 1 -- 192.168.123.105:0/4127537266 learned_addr learned my addr 192.168.123.105:0/4127537266 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.587+0000 7f0b627fc640 1 -- 192.168.123.105:0/4127537266 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1256625411 0 0) 0x7f0b6410cd80 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.587+0000 7f0b627fc640 1 -- 192.168.123.105:0/4127537266 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0b48003620 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.587+0000 7f0b627fc640 1 -- 192.168.123.105:0/4127537266 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 502262745 0 0) 0x7f0b48003620 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.587+0000 7f0b627fc640 1 -- 192.168.123.105:0/4127537266 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0b6410df60 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.587+0000 7f0b627fc640 1 -- 192.168.123.105:0/4127537266 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f0b54002e10 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.587+0000 7f0b627fc640 1 -- 192.168.123.105:0/4127537266 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f0b540033e0 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.587+0000 7f0b627fc640 1 -- 192.168.123.105:0/4127537266 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f0b54005780 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.588+0000 7f0b6a22d640 1 -- 192.168.123.105:0/4127537266 >> v1:192.168.123.105:6789/0 conn(0x7f0b64108950 legacy=0x7f0b64108d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.588+0000 7f0b6a22d640 1 -- 192.168.123.105:0/4127537266 shutdown_connections 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.588+0000 7f0b6a22d640 1 -- 192.168.123.105:0/4127537266 >> 192.168.123.105:0/4127537266 conn(0x7f0b6407bdf0 msgr2=0x7f0b6407c240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.588+0000 7f0b6a22d640 1 -- 192.168.123.105:0/4127537266 shutdown_connections 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-10T13:35:31.588+0000 7f0b6a22d640 1 -- 192.168.123.105:0/4127537266 wait complete. 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.588+0000 7f0b6a22d640 1 Processor -- start 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.588+0000 7f0b6a22d640 1 -- start start 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.589+0000 7f0b6a22d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f0b6419ec20 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.589+0000 7f0b637fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f0b64108950 0x7f0b6419e510 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:56110/0 (socket says 192.168.123.105:56110) 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.589+0000 7f0b637fe640 1 -- 192.168.123.105:0/3561067797 learned_addr learned my addr 192.168.123.105:0/3561067797 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.589+0000 7f0b60ff9640 1 -- 192.168.123.105:0/3561067797 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 431374774 0 0) 0x7f0b6419ec20 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.589+0000 7f0b60ff9640 1 -- 192.168.123.105:0/3561067797 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0b38003620 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.589+0000 7f0b60ff9640 1 -- 192.168.123.105:0/3561067797 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 719077240 0 0) 0x7f0b38003620 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.589+0000 7f0b60ff9640 1 -- 192.168.123.105:0/3561067797 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f0b6419ec20 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.589+0000 7f0b60ff9640 1 -- 192.168.123.105:0/3561067797 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f0b54002890 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.590+0000 7f0b60ff9640 1 -- 192.168.123.105:0/3561067797 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1389772377 0 0) 0x7f0b6419ec20 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.590+0000 7f0b60ff9640 1 -- 192.168.123.105:0/3561067797 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0b6419edf0 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.590+0000 7f0b6a22d640 1 -- 192.168.123.105:0/3561067797 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 
0x7f0b6419f100 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.590+0000 7f0b6a22d640 1 -- 192.168.123.105:0/3561067797 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f0b641a2c10 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.591+0000 7f0b60ff9640 1 -- 192.168.123.105:0/3561067797 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f0b54004b90 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.591+0000 7f0b60ff9640 1 -- 192.168.123.105:0/3561067797 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f0b54005d70 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.591+0000 7f0b60ff9640 1 -- 192.168.123.105:0/3561067797 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 4) ==== 50201+0+0 (unknown 3810014055 0 0) 0x7f0b54012440 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.591+0000 7f0b6a22d640 1 -- 192.168.123.105:0/3561067797 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0b6410db20 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.591+0000 7f0b60ff9640 1 -- 192.168.123.105:0/3561067797 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 4001592299 0 0) 0x7f0b5404d2e0 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.594+0000 7f0b60ff9640 1 -- 192.168.123.105:0/3561067797 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f0b54018880 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.691+0000 7f0b6a22d640 1 -- 192.168.123.105:0/3561067797 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "config assimilate-conf"} v 0) -- 0x7f0b641a2f00 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.693+0000 7f0b60ff9640 1 -- 192.168.123.105:0/3561067797 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v3) ==== 70+0+356 (unknown 1187553405 0 1261977095) 0x7f0b54018180 con 0x7f0b64108950 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.695+0000 7f0b6a22d640 1 -- 192.168.123.105:0/3561067797 >> v1:192.168.123.105:6800/1920070151 conn(0x7f0b3803e7d0 legacy=0x7f0b38040c90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.695+0000 7f0b6a22d640 1 -- 192.168.123.105:0/3561067797 >> v1:192.168.123.105:6789/0 conn(0x7f0b64108950 legacy=0x7f0b6419e510 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.696+0000 7f0b6a22d640 1 -- 192.168.123.105:0/3561067797 shutdown_connections 
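The mon_command exchange just above, {"prefix": "config assimilate-conf"} acknowledged with =0 v3, is the bootstrap step that folds the local ceph.conf into the monitors' centralized config store. As a rough CLI sketch of the same call, assuming the conventional config path (the actual input file is not shown in this log), it corresponds to:

    ceph config assimilate-conf -i /etc/ceph/ceph.conf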
2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.696+0000 7f0b6a22d640 1 -- 192.168.123.105:0/3561067797 >> 192.168.123.105:0/3561067797 conn(0x7f0b6407bdf0 msgr2=0x7f0b641056f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.696+0000 7f0b6a22d640 1 -- 192.168.123.105:0/3561067797 shutdown_connections 2026-03-10T13:35:31.845 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.696+0000 7f0b6a22d640 1 -- 192.168.123.105:0/3561067797 wait complete. 2026-03-10T13:35:31.846 INFO:teuthology.orchestra.run.vm05.stdout:Enabling cephadm module... 2026-03-10T13:35:32.587 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.977+0000 7f52b03c1640 1 Processor -- start 2026-03-10T13:35:32.587 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.978+0000 7f52b03c1640 1 -- start start 2026-03-10T13:35:32.587 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.978+0000 7f52b03c1640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f52a810cd80 con 0x7f52a8108950 2026-03-10T13:35:32.587 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.979+0000 7f52ae136640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f52a8108950 0x7f52a8108d50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:56122/0 (socket says 192.168.123.105:56122) 2026-03-10T13:35:32.587 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.979+0000 7f52ae136640 1 -- 192.168.123.105:0/1537446901 learned_addr learned my addr 192.168.123.105:0/1537446901 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:32.587 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.979+0000 7f52ad134640 1 -- 192.168.123.105:0/1537446901 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 28774582 0 0) 0x7f52a810cd80 con 0x7f52a8108950 2026-03-10T13:35:32.587 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.979+0000 7f52ad134640 1 -- 192.168.123.105:0/1537446901 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5294003620 con 0x7f52a8108950 2026-03-10T13:35:32.587 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.979+0000 7f52ad134640 1 -- 192.168.123.105:0/1537446901 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 7771403 0 0) 0x7f5294003620 con 0x7f52a8108950 2026-03-10T13:35:32.587 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.979+0000 7f52ad134640 1 -- 192.168.123.105:0/1537446901 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f52a810df60 con 0x7f52a8108950 2026-03-10T13:35:32.587 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.979+0000 7f52ad134640 1 -- 192.168.123.105:0/1537446901 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f5290002e10 con 0x7f52a8108950 2026-03-10T13:35:32.587 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.980+0000 7f52ad134640 1 -- 192.168.123.105:0/1537446901 <== mon.0 
v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f52900033e0 con 0x7f52a8108950 2026-03-10T13:35:32.587 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.980+0000 7f52ad134640 1 -- 192.168.123.105:0/1537446901 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f5290005780 con 0x7f52a8108950 2026-03-10T13:35:32.587 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.980+0000 7f52b03c1640 1 -- 192.168.123.105:0/1537446901 >> v1:192.168.123.105:6789/0 conn(0x7f52a8108950 legacy=0x7f52a8108d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.981+0000 7f52b03c1640 1 -- 192.168.123.105:0/1537446901 shutdown_connections 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.981+0000 7f52b03c1640 1 -- 192.168.123.105:0/1537446901 >> 192.168.123.105:0/1537446901 conn(0x7f52a807bdf0 msgr2=0x7f52a807c240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.981+0000 7f52b03c1640 1 -- 192.168.123.105:0/1537446901 shutdown_connections 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.981+0000 7f52b03c1640 1 -- 192.168.123.105:0/1537446901 wait complete. 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.981+0000 7f52b03c1640 1 Processor -- start 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.981+0000 7f52b03c1640 1 -- start start 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.982+0000 7f52ae136640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f52a8108950 0x7f52a819e4b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:56128/0 (socket says 192.168.123.105:56128) 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.982+0000 7f52ae136640 1 -- 192.168.123.105:0/3695914686 learned_addr learned my addr 192.168.123.105:0/3695914686 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.982+0000 7f52b03c1640 1 -- 192.168.123.105:0/3695914686 --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f52a819ebc0 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.982+0000 7f529f7fe640 1 -- 192.168.123.105:0/3695914686 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2829829780 0 0) 0x7f52a819ebc0 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.982+0000 7f529f7fe640 1 -- 192.168.123.105:0/3695914686 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5284003620 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.983+0000 7f529f7fe640 1 -- 192.168.123.105:0/3695914686 <== mon.0 v1:192.168.123.105:6789/0 2 ==== 
auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 471623171 0 0) 0x7f5284003620 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.983+0000 7f529f7fe640 1 -- 192.168.123.105:0/3695914686 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f52a819ebc0 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.983+0000 7f529f7fe640 1 -- 192.168.123.105:0/3695914686 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f5290002890 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.983+0000 7f529f7fe640 1 -- 192.168.123.105:0/3695914686 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 549573222 0 0) 0x7f52a819ebc0 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.983+0000 7f529f7fe640 1 -- 192.168.123.105:0/3695914686 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f52a819ed90 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.983+0000 7f52b03c1640 1 -- 192.168.123.105:0/3695914686 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f52a819f0a0 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.983+0000 7f52b03c1640 1 -- 192.168.123.105:0/3695914686 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f52a81a2c30 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.984+0000 7f529f7fe640 1 -- 192.168.123.105:0/3695914686 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f5290004b90 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.984+0000 7f529f7fe640 1 -- 192.168.123.105:0/3695914686 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f5290005db0 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.984+0000 7f529f7fe640 1 -- 192.168.123.105:0/3695914686 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 4) ==== 50201+0+0 (unknown 3810014055 0 0) 0x7f5290012480 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.984+0000 7f52b03c1640 1 -- 192.168.123.105:0/3695914686 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5270005180 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.985+0000 7f529f7fe640 1 -- 192.168.123.105:0/3695914686 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 4001592299 0 0) 0x7f529004e040 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:31.987+0000 7f529f7fe640 1 -- 192.168.123.105:0/3695914686 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 
1092875540 0 2568732696) 0x7f52900187c0 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.105+0000 7f52b03c1640 1 -- 192.168.123.105:0/3695914686 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) -- 0x7f5270005470 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.451+0000 7f529f7fe640 1 -- 192.168.123.105:0/3695914686 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "cephadm"}]=0 v5) ==== 86+0+0 (unknown 2263024820 0 0) 0x7f52900180c0 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.451+0000 7f529f7fe640 1 -- 192.168.123.105:0/3695914686 <== mon.0 v1:192.168.123.105:6789/0 11 ==== mgrmap(e 5) ==== 50212+0+0 (unknown 2512580191 0 0) 0x7f529004ca30 con 0x7f52a8108950 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.453+0000 7f52b03c1640 1 -- 192.168.123.105:0/3695914686 >> v1:192.168.123.105:6800/1920070151 conn(0x7f528403ebc0 legacy=0x7f5284041080 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.453+0000 7f52b03c1640 1 -- 192.168.123.105:0/3695914686 >> v1:192.168.123.105:6789/0 conn(0x7f52a8108950 legacy=0x7f52a819e4b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.454+0000 7f52b03c1640 1 -- 192.168.123.105:0/3695914686 shutdown_connections 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.454+0000 7f52b03c1640 1 -- 192.168.123.105:0/3695914686 >> 192.168.123.105:0/3695914686 conn(0x7f52a807bdf0 msgr2=0x7f52a81057f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.454+0000 7f52b03c1640 1 -- 192.168.123.105:0/3695914686 shutdown_connections 2026-03-10T13:35:32.588 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.454+0000 7f52b03c1640 1 -- 192.168.123.105:0/3695914686 wait complete. 2026-03-10T13:35:32.706 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:32 vm05 ceph-mon[51512]: mgrmap e4: y(active, since 2s) 2026-03-10T13:35:32.706 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3561067797' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T13:35:32.706 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:32 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3695914686' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T13:35:32.706 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:32 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ignoring --setuser ceph since I am not root 2026-03-10T13:35:32.706 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:32 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ignoring --setgroup ceph since I am not root 2026-03-10T13:35:32.706 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:32 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:32.571+0000 7fa2f97e1140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T13:35:32.706 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:32 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:32.622+0000 7fa2f97e1140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.733+0000 7fe774dfa640 1 Processor -- start 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.733+0000 7fe774dfa640 1 -- start start 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.733+0000 7fe774dfa640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe770111530 con 0x7fe770074160 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.733+0000 7fe76f7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fe770074160 0x7fe770074560 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:56150/0 (socket says 192.168.123.105:56150) 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.734+0000 7fe76f7fe640 1 -- 192.168.123.105:0/2478958754 learned_addr learned my addr 192.168.123.105:0/2478958754 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.734+0000 7fe76e7fc640 1 -- 192.168.123.105:0/2478958754 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1585600173 0 0) 0x7fe770111530 con 0x7fe770074160 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.734+0000 7fe76e7fc640 1 -- 192.168.123.105:0/2478958754 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fe754003620 con 0x7fe770074160 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.734+0000 7fe76e7fc640 1 -- 192.168.123.105:0/2478958754 <== mon.0 v1:192.168.123.105:6789/0 2 ==== 
auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 981872562 0 0) 0x7fe754003620 con 0x7fe770074160 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.734+0000 7fe76e7fc640 1 -- 192.168.123.105:0/2478958754 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe770112710 con 0x7fe770074160 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.734+0000 7fe76e7fc640 1 -- 192.168.123.105:0/2478958754 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fe760002e10 con 0x7fe770074160 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.735+0000 7fe76e7fc640 1 -- 192.168.123.105:0/2478958754 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fe7600033e0 con 0x7fe770074160 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.735+0000 7fe774dfa640 1 -- 192.168.123.105:0/2478958754 >> v1:192.168.123.105:6789/0 conn(0x7fe770074160 legacy=0x7fe770074560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.736+0000 7fe774dfa640 1 -- 192.168.123.105:0/2478958754 shutdown_connections 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.736+0000 7fe774dfa640 1 -- 192.168.123.105:0/2478958754 >> 192.168.123.105:0/2478958754 conn(0x7fe77006f4e0 msgr2=0x7fe770071920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.736+0000 7fe774dfa640 1 -- 192.168.123.105:0/2478958754 shutdown_connections 2026-03-10T13:35:32.995 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.736+0000 7fe774dfa640 1 -- 192.168.123.105:0/2478958754 wait complete. 
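Following the "Enabling cephadm module..." message, the client issues mon_command {"prefix": "mgr module enable", "module": "cephadm"} (acknowledged at map version 5), and a subsequent "mgr stat" query (its mon_command appears a little further down in the stderr trace) produced the { "epoch": 5, "available": true, "active_name": "y", "num_standby": 0 } block printed above. In CLI terms this pair is roughly:

    ceph mgr module enable cephadm
    ceph mgr stat

Enabling a module causes the active mgr to respawn so it can load it, which is why the log next waits for the mgr to restart and for a newer mgrmap epoch.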
2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.736+0000 7fe774dfa640 1 Processor -- start 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.736+0000 7fe774dfa640 1 -- start start 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.736+0000 7fe774dfa640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe77019e620 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.737+0000 7fe76f7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fe770074160 0x7fe77019df10 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:56160/0 (socket says 192.168.123.105:56160) 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.737+0000 7fe76f7fe640 1 -- 192.168.123.105:0/3249480924 learned_addr learned my addr 192.168.123.105:0/3249480924 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.737+0000 7fe76cff9640 1 -- 192.168.123.105:0/3249480924 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3647022840 0 0) 0x7fe77019e620 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.737+0000 7fe76cff9640 1 -- 192.168.123.105:0/3249480924 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fe744003620 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.737+0000 7fe76cff9640 1 -- 192.168.123.105:0/3249480924 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1382739366 0 0) 0x7fe744003620 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.737+0000 7fe76cff9640 1 -- 192.168.123.105:0/3249480924 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fe77019e620 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.737+0000 7fe76cff9640 1 -- 192.168.123.105:0/3249480924 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fe760002bc0 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.737+0000 7fe76cff9640 1 -- 192.168.123.105:0/3249480924 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3994057010 0 0) 0x7fe77019e620 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.738+0000 7fe76cff9640 1 -- 192.168.123.105:0/3249480924 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe77019e7f0 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.738+0000 7fe774dfa640 1 -- 192.168.123.105:0/3249480924 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fe77019aa50 con 0x7fe770074160 2026-03-10T13:35:32.996 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.738+0000 7fe774dfa640 1 -- 192.168.123.105:0/3249480924 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fe77019af90 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.738+0000 7fe76cff9640 1 -- 192.168.123.105:0/3249480924 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fe760002d60 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.738+0000 7fe774dfa640 1 -- 192.168.123.105:0/3249480924 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe770111ee0 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.738+0000 7fe76cff9640 1 -- 192.168.123.105:0/3249480924 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fe760005600 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.742+0000 7fe76cff9640 1 -- 192.168.123.105:0/3249480924 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 5) ==== 50212+0+0 (unknown 2512580191 0 0) 0x7fe760004a10 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.742+0000 7fe76effd640 1 -- 192.168.123.105:0/3249480924 >> v1:192.168.123.105:6800/1920070151 conn(0x7fe74403ec10 legacy=0x7fe7440410d0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/1920070151 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.742+0000 7fe76cff9640 1 -- 192.168.123.105:0/3249480924 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 4001592299 0 0) 0x7fe76004d2e0 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.743+0000 7fe76cff9640 1 -- 192.168.123.105:0/3249480924 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fe760017a60 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.856+0000 7fe774dfa640 1 -- 192.168.123.105:0/3249480924 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "mgr stat"} v 0) -- 0x7fe77019fc20 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.856+0000 7fe76cff9640 1 -- 192.168.123.105:0/3249480924 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v5) ==== 56+0+88 (unknown 3768197548 0 15966916) 0x7fe760017360 con 0x7fe770074160 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.860+0000 7fe74e7fc640 1 -- 192.168.123.105:0/3249480924 >> v1:192.168.123.105:6800/1920070151 conn(0x7fe74403ec10 legacy=0x7fe7440410d0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.860+0000 7fe74e7fc640 1 -- 192.168.123.105:0/3249480924 >> v1:192.168.123.105:6789/0 conn(0x7fe770074160 legacy=0x7fe77019df10 
unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.860+0000 7fe74e7fc640 1 -- 192.168.123.105:0/3249480924 shutdown_connections 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.860+0000 7fe74e7fc640 1 -- 192.168.123.105:0/3249480924 >> 192.168.123.105:0/3249480924 conn(0x7fe77006f4e0 msgr2=0x7fe770071920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.860+0000 7fe74e7fc640 1 -- 192.168.123.105:0/3249480924 shutdown_connections 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:32.860+0000 7fe74e7fc640 1 -- 192.168.123.105:0/3249480924 wait complete. 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for the mgr to restart... 2026-03-10T13:35:32.996 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr epoch 5... 2026-03-10T13:35:33.333 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:33.042+0000 7fa2f97e1140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T13:35:33.761 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3695914686' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T13:35:33.761 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:33 vm05 ceph-mon[51512]: mgrmap e5: y(active, since 3s) 2026-03-10T13:35:33.761 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3249480924' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T13:35:33.761 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:33.380+0000 7fa2f97e1140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T13:35:33.761 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T13:35:33.761 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
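The "Waiting for the mgr to restart... / Waiting for mgr epoch 5..." lines cover the window in which mgr.y respawns to pick up the newly enabled cephadm module: the journalctl stream below shows it re-loading its Python modules (the per-module "missing NOTIFY_TYPES member" warnings), the monitor then logs "Active manager daemon y restarted" and "Manager daemon y is now available" while publishing mgrmap e6 and e7, and the client keeps polling until mgr_status reports {"mgrmap_epoch": 7, "initialized": true}. A minimal shell sketch of an equivalent wait, assuming jq is available (the check in this run actually goes through the mgr's own mgr_status command), would be:

    # poll the mgr map until an available active mgr with a new enough epoch appears
    until ceph mgr stat -f json | jq -e '.available and .epoch >= 5' >/dev/null; do
        sleep 1
    done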
2026-03-10T13:35:33.761 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: from numpy import show_config as show_numpy_config 2026-03-10T13:35:33.762 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:33.465+0000 7fa2f97e1140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T13:35:33.762 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:33.501+0000 7fa2f97e1140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T13:35:33.762 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:33.574+0000 7fa2f97e1140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T13:35:34.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:34 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:34.069+0000 7fa2f97e1140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T13:35:34.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:34 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:34.180+0000 7fa2f97e1140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:35:34.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:34 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:34.221+0000 7fa2f97e1140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T13:35:34.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:34 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:34.257+0000 7fa2f97e1140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T13:35:34.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:34 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:34.299+0000 7fa2f97e1140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T13:35:34.778 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:34 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:34.336+0000 7fa2f97e1140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T13:35:34.778 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:34 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:34.506+0000 7fa2f97e1140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T13:35:34.778 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:34 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:34.555+0000 7fa2f97e1140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T13:35:35.055 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:34 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:34.777+0000 7fa2f97e1140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T13:35:35.326 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:35 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:35.054+0000 7fa2f97e1140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T13:35:35.326 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:35 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:35.092+0000 7fa2f97e1140 -1 mgr[py] Module selftest has 
missing NOTIFY_TYPES member 2026-03-10T13:35:35.326 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:35 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:35.132+0000 7fa2f97e1140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T13:35:35.326 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:35 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:35.210+0000 7fa2f97e1140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T13:35:35.326 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:35 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:35.247+0000 7fa2f97e1140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T13:35:35.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:35 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:35.325+0000 7fa2f97e1140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T13:35:35.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:35 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:35.434+0000 7fa2f97e1140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:35:35.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:35 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:35.574+0000 7fa2f97e1140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:35 vm05 ceph-mon[51512]: Active manager daemon y restarted 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:35 vm05 ceph-mon[51512]: Activating manager daemon y 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:35 vm05 ceph-mon[51512]: osdmap e2: 0 total, 0 up, 0 in 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:35 vm05 ceph-mon[51512]: mgrmap e6: y(active, starting, since 0.00762872s) 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:35 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:35 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:35 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:35 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:35 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:35 vm05 ceph-mon[51512]: Manager daemon y is now available 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:35 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:35 vm05 ceph-mon[51512]: from='mgr.14118 
v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:35 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:35 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:35 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:35:36.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:35 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:35.613+0000 7fa2f97e1140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7, 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.153+0000 7f9afaafd640 1 Processor -- start 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.153+0000 7f9afaafd640 1 -- start start 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.153+0000 7f9afaafd640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9af4111530 con 0x7f9af4074160 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.153+0000 7f9af9afb640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f9af4074160 0x7f9af4074560 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:56170/0 (socket says 192.168.123.105:56170) 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.153+0000 7f9af9afb640 1 -- 192.168.123.105:0/4228722895 learned_addr learned my addr 192.168.123.105:0/4228722895 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.154+0000 7f9af8af9640 1 -- 192.168.123.105:0/4228722895 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3238511265 0 0) 0x7f9af4111530 con 0x7f9af4074160 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.154+0000 7f9af8af9640 1 -- 192.168.123.105:0/4228722895 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9ad8003620 con 0x7f9af4074160 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.154+0000 7f9af8af9640 1 -- 192.168.123.105:0/4228722895 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 2810354380 0 0) 0x7f9ad8003620 con 0x7f9af4074160 2026-03-10T13:35:36.791 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.154+0000 7f9af8af9640 1 -- 192.168.123.105:0/4228722895 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9af4112710 con 0x7f9af4074160 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.154+0000 7f9af8af9640 1 -- 192.168.123.105:0/4228722895 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f9af0002e10 con 0x7f9af4074160 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.155+0000 7f9af8af9640 1 -- 192.168.123.105:0/4228722895 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f9af00033e0 con 0x7f9af4074160 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.155+0000 7f9afaafd640 1 -- 192.168.123.105:0/4228722895 >> v1:192.168.123.105:6789/0 conn(0x7f9af4074160 legacy=0x7f9af4074560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.155+0000 7f9afaafd640 1 -- 192.168.123.105:0/4228722895 shutdown_connections 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.155+0000 7f9afaafd640 1 -- 192.168.123.105:0/4228722895 >> 192.168.123.105:0/4228722895 conn(0x7f9af406f4e0 msgr2=0x7f9af4071920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.155+0000 7f9afaafd640 1 -- 192.168.123.105:0/4228722895 shutdown_connections 2026-03-10T13:35:36.791 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.155+0000 7f9afaafd640 1 -- 192.168.123.105:0/4228722895 wait complete. 
2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.156+0000 7f9afaafd640 1 Processor -- start 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.156+0000 7f9afaafd640 1 -- start start 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.156+0000 7f9afaafd640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9af41a4150 con 0x7f9af41a3620 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.156+0000 7f9af9afb640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f9af41a3620 0x7f9af41a3a40 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:56182/0 (socket says 192.168.123.105:56182) 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.156+0000 7f9af9afb640 1 -- 192.168.123.105:0/2045346811 learned_addr learned my addr 192.168.123.105:0/2045346811 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.156+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1548735463 0 0) 0x7f9af41a4150 con 0x7f9af41a3620 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.157+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9ac8003620 con 0x7f9af41a3620 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.157+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1708819029 0 0) 0x7f9ac8003620 con 0x7f9af41a3620 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.157+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f9af41a4150 con 0x7f9af41a3620 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.157+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f9af0003170 con 0x7f9af41a3620 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.157+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1566829576 0 0) 0x7f9af41a4150 con 0x7f9af41a3620 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.157+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9af41a8b40 con 0x7f9af41a3620 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.157+0000 7f9afaafd640 1 -- 192.168.123.105:0/2045346811 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f9af41a7b30 con 0x7f9af41a3620 2026-03-10T13:35:36.792 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.158+0000 7f9afaafd640 1 -- 192.168.123.105:0/2045346811 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f9af41a8070 con 0x7f9af41a3620 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.158+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f9af00034b0 con 0x7f9af41a3620 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.159+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f9af0005c50 con 0x7f9af41a3620 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.159+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 5) ==== 50212+0+0 (unknown 2512580191 0 0) 0x7f9af0007370 con 0x7f9af41a3620 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.159+0000 7f9af92fa640 1 -- 192.168.123.105:0/2045346811 >> v1:192.168.123.105:6800/1920070151 conn(0x7f9ac803ec60 legacy=0x7f9ac8041100 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/1920070151 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.159+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (unknown 4001592299 0 0) 0x7f9af004dc60 con 0x7f9af41a3620 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.159+0000 7f9afaafd640 1 -- 192.168.123.105:0/2045346811 --> v1:192.168.123.105:6800/1920070151 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7f9af4111d90 con 0x7f9ac803ec60 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.360+0000 7f9af92fa640 1 -- 192.168.123.105:0/2045346811 >> v1:192.168.123.105:6800/1920070151 conn(0x7f9ac803ec60 legacy=0x7f9ac8041100 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/1920070151 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:33.760+0000 7f9af92fa640 1 -- 192.168.123.105:0/2045346811 >> v1:192.168.123.105:6800/1920070151 conn(0x7f9ac803ec60 legacy=0x7f9ac8041100 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/1920070151 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:34.561+0000 7f9af92fa640 1 -- 192.168.123.105:0/2045346811 >> v1:192.168.123.105:6800/1920070151 conn(0x7f9ac803ec60 legacy=0x7f9ac8041100 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/1920070151 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:35.620+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mgrmap(e 6) ==== 50014+0+0 (unknown 1361211852 0 0) 0x7f9af004ca30 con 0x7f9af41a3620 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:35.620+0000 7f9ae2ffd640 1 -- 
192.168.123.105:0/2045346811 >> v1:192.168.123.105:6800/1920070151 conn(0x7f9ac803ec60 legacy=0x7f9ac8041100 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.622+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mgrmap(e 7) ==== 50106+0+0 (unknown 1734976888 0 0) 0x7f9af004d4b0 con 0x7f9af41a3620 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.622+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 --> v1:192.168.123.105:6800/3334108074 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7f9af4111d90 con 0x7f9ac8043400 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.626+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 <== mgr.14118 v1:192.168.123.105:6800/3334108074 1 ==== command_reply(tid 0: 0 ) ==== 8+0+8901 (unknown 0 0 3832181493) 0x7f9af4111d90 con 0x7f9ac8043400 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.630+0000 7f9afaafd640 1 -- 192.168.123.105:0/2045346811 --> v1:192.168.123.105:6800/3334108074 -- command(tid 1: {"prefix": "mgr_status"}) -- 0x7f9af4111f40 con 0x7f9ac8043400 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.631+0000 7f9ae2ffd640 1 -- 192.168.123.105:0/2045346811 <== mgr.14118 v1:192.168.123.105:6800/3334108074 2 ==== command_reply(tid 1: 0 ) ==== 8+0+51 (unknown 0 0 96372106) 0x7f9af4111f40 con 0x7f9ac8043400 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.634+0000 7f9ae0ff9640 1 -- 192.168.123.105:0/2045346811 >> v1:192.168.123.105:6800/3334108074 conn(0x7f9ac8043400 legacy=0x7f9ac80457f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.634+0000 7f9ae0ff9640 1 -- 192.168.123.105:0/2045346811 >> v1:192.168.123.105:6789/0 conn(0x7f9af41a3620 legacy=0x7f9af41a3a40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.634+0000 7f9ae0ff9640 1 -- 192.168.123.105:0/2045346811 shutdown_connections 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.634+0000 7f9ae0ff9640 1 -- 192.168.123.105:0/2045346811 >> 192.168.123.105:0/2045346811 conn(0x7f9af406f4e0 msgr2=0x7f9af4110d70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.634+0000 7f9ae0ff9640 1 -- 192.168.123.105:0/2045346811 shutdown_connections 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.634+0000 7f9ae0ff9640 1 -- 192.168.123.105:0/2045346811 wait complete. 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:mgr epoch 5 is available 2026-03-10T13:35:36.792 INFO:teuthology.orchestra.run.vm05.stdout:Setting orchestrator backend to cephadm... 2026-03-10T13:35:36.886 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:36 vm05 ceph-mon[51512]: Found migration_current of "None". Setting to last migration. 
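Once the restarted mgr is confirmed ("mgr epoch 5 is available"), the run moves on to "Setting orchestrator backend to cephadm...": a fresh client session connects and sends an mgr_command with prefix "orch set backend" and module_name "cephadm" to the active mgr (its payload is visible, truncated, further down in the stderr trace). The equivalent CLI call is:

    ceph orch set backend cephadm

after which a follow-up check such as ceph orch status, not part of this excerpt, would typically report cephadm as the active backend.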
2026-03-10T13:35:36.886 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:36 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T13:35:36.886 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:36 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:36.886 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:36 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:36.886 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:36 vm05 ceph-mon[51512]: mgrmap e7: y(active, since 1.01006s) 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.966+0000 7f555ef9b640 1 Processor -- start 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.967+0000 7f555ef9b640 1 -- start start 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.967+0000 7f555ef9b640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f55500a8cd0 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.967+0000 7f555df99640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f55500a48a0 0x7f55500a4ca0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36686/0 (socket says 192.168.123.105:36686) 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.967+0000 7f555df99640 1 -- 192.168.123.105:0/217822253 learned_addr learned my addr 192.168.123.105:0/217822253 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.968+0000 7f555cf97640 1 -- 192.168.123.105:0/217822253 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4249902594 0 0) 0x7f55500a8cd0 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.968+0000 7f555cf97640 1 -- 192.168.123.105:0/217822253 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f553c003620 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.968+0000 7f555cf97640 1 -- 192.168.123.105:0/217822253 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 3839736887 0 0) 0x7f553c003620 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.968+0000 7f555cf97640 1 -- 192.168.123.105:0/217822253 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f55500a9eb0 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.968+0000 7f555cf97640 1 -- 192.168.123.105:0/217822253 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f5554002e10 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.968+0000 7f555cf97640 1 -- 192.168.123.105:0/217822253 <== mon.0 
v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f55540033e0 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.969+0000 7f555ef9b640 1 -- 192.168.123.105:0/217822253 >> v1:192.168.123.105:6789/0 conn(0x7f55500a48a0 legacy=0x7f55500a4ca0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.970+0000 7f555ef9b640 1 -- 192.168.123.105:0/217822253 shutdown_connections 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.970+0000 7f555ef9b640 1 -- 192.168.123.105:0/217822253 >> 192.168.123.105:0/217822253 conn(0x7f555009fbb0 msgr2=0x7f55500a2010 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.970+0000 7f555ef9b640 1 -- 192.168.123.105:0/217822253 shutdown_connections 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.972+0000 7f555ef9b640 1 -- 192.168.123.105:0/217822253 wait complete. 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.973+0000 7f555ef9b640 1 Processor -- start 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.973+0000 7f555ef9b640 1 -- start start 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.973+0000 7f555ef9b640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f555013a650 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.974+0000 7f555df99640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f55500a48a0 0x7f5550139f40 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36694/0 (socket says 192.168.123.105:36694) 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.974+0000 7f555df99640 1 -- 192.168.123.105:0/1844398852 learned_addr learned my addr 192.168.123.105:0/1844398852 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.974+0000 7f5546ffd640 1 -- 192.168.123.105:0/1844398852 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3488253946 0 0) 0x7f555013a650 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.974+0000 7f5546ffd640 1 -- 192.168.123.105:0/1844398852 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5534003620 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.974+0000 7f5546ffd640 1 -- 192.168.123.105:0/1844398852 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 697112271 0 0) 0x7f5534003620 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.974+0000 7f5546ffd640 1 -- 192.168.123.105:0/1844398852 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 
0x7f555013a650 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.974+0000 7f5546ffd640 1 -- 192.168.123.105:0/1844398852 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f5554002cd0 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.975+0000 7f5546ffd640 1 -- 192.168.123.105:0/1844398852 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1751330407 0 0) 0x7f555013a650 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.975+0000 7f5546ffd640 1 -- 192.168.123.105:0/1844398852 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f555013a820 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.975+0000 7f555ef9b640 1 -- 192.168.123.105:0/1844398852 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f555013ab30 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.975+0000 7f555ef9b640 1 -- 192.168.123.105:0/1844398852 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f555013e6c0 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.976+0000 7f555ef9b640 1 -- 192.168.123.105:0/1844398852 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f555013e9b0 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.979+0000 7f5546ffd640 1 -- 192.168.123.105:0/1844398852 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f5554003200 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.979+0000 7f5546ffd640 1 -- 192.168.123.105:0/1844398852 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f55540055d0 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.979+0000 7f5546ffd640 1 -- 192.168.123.105:0/1844398852 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 7) ==== 50106+0+0 (unknown 1734976888 0 0) 0x7f55540126d0 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.979+0000 7f5546ffd640 1 -- 192.168.123.105:0/1844398852 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 1497606905 0 0) 0x7f555404df50 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:36.980+0000 7f5546ffd640 1 -- 192.168.123.105:0/1844398852 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f5554014220 con 0x7f55500a48a0 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.080+0000 7f555ef9b640 1 -- 192.168.123.105:0/1844398852 --> v1:192.168.123.105:6800/3334108074 -- mgr_command(tid 0: {"prefix": "orch set backend", "module_name": "cephadm", 
"target": ["mon-mgr", ""]}) -- 0x7f5550003140 con 0x7f553403eb10 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.089+0000 7f5546ffd640 1 -- 192.168.123.105:0/1844398852 <== mgr.14118 v1:192.168.123.105:6800/3334108074 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (unknown 0 0 0) 0x7f5550003140 con 0x7f553403eb10 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.092+0000 7f5544ff9640 1 -- 192.168.123.105:0/1844398852 >> v1:192.168.123.105:6800/3334108074 conn(0x7f553403eb10 legacy=0x7f5534040fd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.092+0000 7f5544ff9640 1 -- 192.168.123.105:0/1844398852 >> v1:192.168.123.105:6789/0 conn(0x7f55500a48a0 legacy=0x7f5550139f40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.092+0000 7f5544ff9640 1 -- 192.168.123.105:0/1844398852 shutdown_connections 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.092+0000 7f5544ff9640 1 -- 192.168.123.105:0/1844398852 >> 192.168.123.105:0/1844398852 conn(0x7f555009fbb0 msgr2=0x7f55500a2010 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.092+0000 7f5544ff9640 1 -- 192.168.123.105:0/1844398852 shutdown_connections 2026-03-10T13:35:37.234 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.092+0000 7f5544ff9640 1 -- 192.168.123.105:0/1844398852 wait complete. 
2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.368+0000 7fca65c54640 1 Processor -- start 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.369+0000 7fca65c54640 1 -- start start 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.369+0000 7fca65c54640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fca6010ab30 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.369+0000 7fca5f7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fca60106700 0x7fca60106b00 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36710/0 (socket says 192.168.123.105:36710) 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.369+0000 7fca5f7fe640 1 -- 192.168.123.105:0/1224937846 learned_addr learned my addr 192.168.123.105:0/1224937846 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.369+0000 7fca5e7fc640 1 -- 192.168.123.105:0/1224937846 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3994501445 0 0) 0x7fca6010ab30 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.369+0000 7fca5e7fc640 1 -- 192.168.123.105:0/1224937846 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fca3c003620 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.370+0000 7fca5e7fc640 1 -- 192.168.123.105:0/1224937846 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 788646378 0 0) 0x7fca3c003620 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.370+0000 7fca5e7fc640 1 -- 192.168.123.105:0/1224937846 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fca6010bd10 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.370+0000 7fca5e7fc640 1 -- 192.168.123.105:0/1224937846 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fca50002e10 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.370+0000 7fca5e7fc640 1 -- 192.168.123.105:0/1224937846 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fca500034a0 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.370+0000 7fca65c54640 1 -- 192.168.123.105:0/1224937846 >> v1:192.168.123.105:6789/0 conn(0x7fca60106700 legacy=0x7fca60106b00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.370+0000 7fca65c54640 1 -- 192.168.123.105:0/1224937846 shutdown_connections 2026-03-10T13:35:37.630 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.370+0000 7fca65c54640 1 -- 192.168.123.105:0/1224937846 >> 192.168.123.105:0/1224937846 conn(0x7fca60101e90 msgr2=0x7fca601042d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.370+0000 7fca65c54640 1 -- 192.168.123.105:0/1224937846 shutdown_connections 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.370+0000 7fca65c54640 1 -- 192.168.123.105:0/1224937846 wait complete. 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.371+0000 7fca65c54640 1 Processor -- start 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.371+0000 7fca65c54640 1 -- start start 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.371+0000 7fca65c54640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fca601a2420 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.371+0000 7fca5f7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fca60106700 0x7fca6007c480 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36724/0 (socket says 192.168.123.105:36724) 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.371+0000 7fca5f7fe640 1 -- 192.168.123.105:0/466531122 learned_addr learned my addr 192.168.123.105:0/466531122 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.372+0000 7fca5cff9640 1 -- 192.168.123.105:0/466531122 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2310482583 0 0) 0x7fca601a2420 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.372+0000 7fca5cff9640 1 -- 192.168.123.105:0/466531122 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fca34003620 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.372+0000 7fca5cff9640 1 -- 192.168.123.105:0/466531122 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3167002706 0 0) 0x7fca34003620 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.372+0000 7fca5cff9640 1 -- 192.168.123.105:0/466531122 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fca601a2420 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.372+0000 7fca5cff9640 1 -- 192.168.123.105:0/466531122 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fca50003270 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.372+0000 7fca5cff9640 1 -- 192.168.123.105:0/466531122 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 620940340 0 0) 0x7fca601a2420 con 
0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.372+0000 7fca5cff9640 1 -- 192.168.123.105:0/466531122 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fca601a3600 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.372+0000 7fca65c54640 1 -- 192.168.123.105:0/466531122 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fca601a25f0 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.372+0000 7fca5cff9640 1 -- 192.168.123.105:0/466531122 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fca500034d0 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.373+0000 7fca5cff9640 1 -- 192.168.123.105:0/466531122 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fca50006160 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.373+0000 7fca65c54640 1 -- 192.168.123.105:0/466531122 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fca601a2ab0 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.374+0000 7fca5cff9640 1 -- 192.168.123.105:0/466531122 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 7) ==== 50106+0+0 (unknown 1734976888 0 0) 0x7fca500127b0 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.374+0000 7fca65c54640 1 -- 192.168.123.105:0/466531122 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fca6010b8d0 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.375+0000 7fca5cff9640 1 -- 192.168.123.105:0/466531122 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 1497606905 0 0) 0x7fca5004d4c0 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.377+0000 7fca5cff9640 1 -- 192.168.123.105:0/466531122 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fca50018b80 con 0x7fca60106700 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.468+0000 7fca65c54640 1 -- 192.168.123.105:0/466531122 --> v1:192.168.123.105:6800/3334108074 -- mgr_command(tid 0: {"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}) -- 0x7fca601a2df0 con 0x7fca3403e790 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.469+0000 7fca5cff9640 1 -- 192.168.123.105:0/466531122 <== mgr.14118 v1:192.168.123.105:6800/3334108074 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+16 (unknown 0 0 2070689548) 0x7fca601a2df0 con 0x7fca3403e790 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.474+0000 7fca65c54640 1 -- 192.168.123.105:0/466531122 >> v1:192.168.123.105:6800/3334108074 conn(0x7fca3403e790 legacy=0x7fca34040c50 unknown :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.474+0000 7fca65c54640 1 -- 192.168.123.105:0/466531122 >> v1:192.168.123.105:6789/0 conn(0x7fca60106700 legacy=0x7fca6007c480 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.474+0000 7fca65c54640 1 -- 192.168.123.105:0/466531122 shutdown_connections 2026-03-10T13:35:37.630 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.474+0000 7fca65c54640 1 -- 192.168.123.105:0/466531122 >> 192.168.123.105:0/466531122 conn(0x7fca60101e90 msgr2=0x7fca60192eb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:37.631 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.474+0000 7fca65c54640 1 -- 192.168.123.105:0/466531122 shutdown_connections 2026-03-10T13:35:37.631 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.474+0000 7fca65c54640 1 -- 192.168.123.105:0/466531122 wait complete. 2026-03-10T13:35:37.631 INFO:teuthology.orchestra.run.vm05.stdout:Generating ssh key... 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: Generating public/private ed25519 key pair. 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: Your identification has been saved in /tmp/tmpblwhrdbh/key 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: Your public key has been saved in /tmp/tmpblwhrdbh/key.pub 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: The key fingerprint is: 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: SHA256:J0S2b+A+wyNKikF7wOJEyCiOxpwiS7qIrvOKXeIEGqI ceph-e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: The key's randomart image is: 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: +--[ED25519 256]--+ 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: | o | 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: |+ o . | 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: |+o + | 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: |O . 
o o | 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: |O% S + | 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: |#+o o + | 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: |Eoo.o . * | 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: |=B.= . . + | 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: |@== . | 2026-03-10T13:35:38.064 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: +----[SHA256]-----+ 2026-03-10T13:35:38.185 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.761+0000 7f5d770f1640 1 Processor -- start 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.762+0000 7f5d770f1640 1 -- start start 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.762+0000 7f5d770f1640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f5d7010cbe0 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.762+0000 7f5d760ef640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f5d701087b0 0x7f5d70108bb0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36738/0 (socket says 192.168.123.105:36738) 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.762+0000 7f5d760ef640 1 -- 192.168.123.105:0/2774409774 learned_addr learned my addr 192.168.123.105:0/2774409774 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.762+0000 7f5d750ed640 1 -- 192.168.123.105:0/2774409774 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4009194340 0 0) 0x7f5d7010cbe0 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.763+0000 7f5d750ed640 1 -- 192.168.123.105:0/2774409774 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5d54003620 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.763+0000 7f5d750ed640 1 -- 192.168.123.105:0/2774409774 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 65689810 0 0) 0x7f5d54003620 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.763+0000 7f5d750ed640 1 -- 192.168.123.105:0/2774409774 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5d7010ddc0 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.763+0000 7f5d750ed640 1 -- 192.168.123.105:0/2774409774 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f5d60002e10 
con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.763+0000 7f5d750ed640 1 -- 192.168.123.105:0/2774409774 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f5d600033e0 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.763+0000 7f5d770f1640 1 -- 192.168.123.105:0/2774409774 >> v1:192.168.123.105:6789/0 conn(0x7f5d701087b0 legacy=0x7f5d70108bb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.763+0000 7f5d770f1640 1 -- 192.168.123.105:0/2774409774 shutdown_connections 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.763+0000 7f5d770f1640 1 -- 192.168.123.105:0/2774409774 >> 192.168.123.105:0/2774409774 conn(0x7f5d7007bc90 msgr2=0x7f5d7007c0a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.764+0000 7f5d770f1640 1 -- 192.168.123.105:0/2774409774 shutdown_connections 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.764+0000 7f5d770f1640 1 -- 192.168.123.105:0/2774409774 wait complete. 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.764+0000 7f5d770f1640 1 Processor -- start 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.764+0000 7f5d770f1640 1 -- start start 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.764+0000 7f5d770f1640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f5d70080c20 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.764+0000 7f5d760ef640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f5d701087b0 0x7f5d70080510 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36754/0 (socket says 192.168.123.105:36754) 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.764+0000 7f5d760ef640 1 -- 192.168.123.105:0/3592973527 learned_addr learned my addr 192.168.123.105:0/3592973527 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.765+0000 7f5d677fe640 1 -- 192.168.123.105:0/3592973527 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1170774918 0 0) 0x7f5d70080c20 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.765+0000 7f5d677fe640 1 -- 192.168.123.105:0/3592973527 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5d4c003620 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.765+0000 7f5d677fe640 1 -- 192.168.123.105:0/3592973527 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1999054927 0 0) 0x7f5d4c003620 con 0x7f5d701087b0 2026-03-10T13:35:38.186 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.765+0000 7f5d677fe640 1 -- 192.168.123.105:0/3592973527 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f5d70080c20 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.765+0000 7f5d677fe640 1 -- 192.168.123.105:0/3592973527 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f5d60003170 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.765+0000 7f5d677fe640 1 -- 192.168.123.105:0/3592973527 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1057070317 0 0) 0x7f5d70080c20 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.765+0000 7f5d677fe640 1 -- 192.168.123.105:0/3592973527 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5d70080df0 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.765+0000 7f5d770f1640 1 -- 192.168.123.105:0/3592973527 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f5d7007d050 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.765+0000 7f5d677fe640 1 -- 192.168.123.105:0/3592973527 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f5d60004d10 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.765+0000 7f5d677fe640 1 -- 192.168.123.105:0/3592973527 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f5d60006130 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.765+0000 7f5d770f1640 1 -- 192.168.123.105:0/3592973527 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f5d7007d510 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.766+0000 7f5d677fe640 1 -- 192.168.123.105:0/3592973527 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 7) ==== 50106+0+0 (unknown 1734976888 0 0) 0x7f5d60007400 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.766+0000 7f5d770f1640 1 -- 192.168.123.105:0/3592973527 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5d38005180 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.767+0000 7f5d677fe640 1 -- 192.168.123.105:0/3592973527 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 1497606905 0 0) 0x7f5d6004dd70 con 0x7f5d701087b0 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.769+0000 7f5d677fe640 1 -- 192.168.123.105:0/3592973527 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f5d60018670 con 0x7f5d701087b0 2026-03-10T13:35:38.186 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.861+0000 7f5d770f1640 1 -- 192.168.123.105:0/3592973527 --> v1:192.168.123.105:6800/3334108074 -- mgr_command(tid 0: {"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}) -- 0x7f5d38002bf0 con 0x7f5d4c03eb40 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.886+0000 7f5d677fe640 1 -- 192.168.123.105:0/3592973527 <== mgr.14118 v1:192.168.123.105:6800/3334108074 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (unknown 0 0 0) 0x7f5d38002bf0 con 0x7f5d4c03eb40 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.888+0000 7f5d770f1640 1 -- 192.168.123.105:0/3592973527 >> v1:192.168.123.105:6800/3334108074 conn(0x7f5d4c03eb40 legacy=0x7f5d4c041000 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.888+0000 7f5d770f1640 1 -- 192.168.123.105:0/3592973527 >> v1:192.168.123.105:6789/0 conn(0x7f5d701087b0 legacy=0x7f5d70080510 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.888+0000 7f5d770f1640 1 -- 192.168.123.105:0/3592973527 shutdown_connections 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.888+0000 7f5d770f1640 1 -- 192.168.123.105:0/3592973527 >> 192.168.123.105:0/3592973527 conn(0x7f5d7007bc90 msgr2=0x7f5d70105090 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.888+0000 7f5d770f1640 1 -- 192.168.123.105:0/3592973527 shutdown_connections 2026-03-10T13:35:38.186 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:37.888+0000 7f5d770f1640 1 -- 192.168.123.105:0/3592973527 wait complete. 
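The journalctl lines above show mgr.y minting an ed25519 identity in response to the "cephadm generate-key" mgr_command; the public half is fetched next via "cephadm get-pub-key" and authorized for the root user chosen earlier with "cephadm set-user". A sketch of the same sequence from a shell, reusing the host and key path that appear later in this log; the ssh-copy-id step is one assumed way to distribute the key by hand:

    # Mint the cephadm ssh identity, export the public half, and authorize it
    # on a host the orchestrator should manage.
    ceph cephadm generate-key
    ceph cephadm get-pub-key > /home/ubuntu/cephtest/ceph.pub
    ssh-copy-id -f -i /home/ubuntu/cephtest/ceph.pub root@vm05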
2026-03-10T13:35:38.314 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:38 vm05 ceph-mon[51512]: from='client.14122 v1:192.168.123.105:0/2045346811' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T13:35:38.314 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:38 vm05 ceph-mon[51512]: from='client.14122 v1:192.168.123.105:0/2045346811' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T13:35:38.314 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:38 vm05 ceph-mon[51512]: from='client.14130 v1:192.168.123.105:0/1844398852' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:38.314 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:38 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:38.314 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:38 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:38.314 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:38 vm05 ceph-mon[51512]: [10/Mar/2026:13:35:37] ENGINE Bus STARTING 2026-03-10T13:35:38.314 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:38 vm05 ceph-mon[51512]: [10/Mar/2026:13:35:37] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T13:35:38.314 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:38 vm05 ceph-mon[51512]: [10/Mar/2026:13:35:37] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T13:35:38.314 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:38 vm05 ceph-mon[51512]: [10/Mar/2026:13:35:37] ENGINE Bus STARTED 2026-03-10T13:35:38.314 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:38 vm05 ceph-mon[51512]: [10/Mar/2026:13:35:37] ENGINE Client ('192.168.123.105', 44358) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:35:38.314 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:38 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:38.315 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:38 vm05 ceph-mon[51512]: from='client.14132 v1:192.168.123.105:0/466531122' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:38.315 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:38 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:38.315 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:38 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:38.585 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOiSjLYMQpdq6CS2mH43c483nurQgxF4IVVwFK6/SzGc ceph-e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:35:38.585 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.313+0000 7f6812130640 1 Processor -- start 2026-03-10T13:35:38.585 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.314+0000 7f6812130640 1 -- start start 2026-03-10T13:35:38.585 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.314+0000 7f6812130640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes 
epoch 0) -- 0x7f680c07b360 con 0x7f680c07c7f0 2026-03-10T13:35:38.585 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.314+0000 7f680b7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f680c07c7f0 0x7f680c07ac50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36756/0 (socket says 192.168.123.105:36756) 2026-03-10T13:35:38.585 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.314+0000 7f680b7fe640 1 -- 192.168.123.105:0/3263871020 learned_addr learned my addr 192.168.123.105:0/3263871020 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:38.585 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.315+0000 7f680a7fc640 1 -- 192.168.123.105:0/3263871020 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2779985892 0 0) 0x7f680c07b360 con 0x7f680c07c7f0 2026-03-10T13:35:38.585 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.315+0000 7f680a7fc640 1 -- 192.168.123.105:0/3263871020 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f67f4003620 con 0x7f680c07c7f0 2026-03-10T13:35:38.585 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.315+0000 7f680a7fc640 1 -- 192.168.123.105:0/3263871020 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 705601643 0 0) 0x7f67f4003620 con 0x7f680c07c7f0 2026-03-10T13:35:38.585 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.315+0000 7f680a7fc640 1 -- 192.168.123.105:0/3263871020 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f680c10a0c0 con 0x7f680c07c7f0 2026-03-10T13:35:38.585 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.315+0000 7f680a7fc640 1 -- 192.168.123.105:0/3263871020 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f67fc002e10 con 0x7f680c07c7f0 2026-03-10T13:35:38.585 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.315+0000 7f680a7fc640 1 -- 192.168.123.105:0/3263871020 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f67fc0033e0 con 0x7f680c07c7f0 2026-03-10T13:35:38.585 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.316+0000 7f6812130640 1 -- 192.168.123.105:0/3263871020 >> v1:192.168.123.105:6789/0 conn(0x7f680c07c7f0 legacy=0x7f680c07ac50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:38.585 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.316+0000 7f6812130640 1 -- 192.168.123.105:0/3263871020 shutdown_connections 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.316+0000 7f6812130640 1 -- 192.168.123.105:0/3263871020 >> 192.168.123.105:0/3263871020 conn(0x7f680c101e40 msgr2=0x7f680c1042a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.316+0000 7f6812130640 1 -- 192.168.123.105:0/3263871020 shutdown_connections 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.316+0000 7f6812130640 1 -- 
192.168.123.105:0/3263871020 wait complete. 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.316+0000 7f6812130640 1 Processor -- start 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.317+0000 7f6812130640 1 -- start start 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.317+0000 7f6812130640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f680c1abac0 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.317+0000 7f680b7fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f680c07c7f0 0x7f680c1ab3b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36758/0 (socket says 192.168.123.105:36758) 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.317+0000 7f680b7fe640 1 -- 192.168.123.105:0/866114034 learned_addr learned my addr 192.168.123.105:0/866114034 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.317+0000 7f6808ff9640 1 -- 192.168.123.105:0/866114034 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4276602883 0 0) 0x7f680c1abac0 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.317+0000 7f6808ff9640 1 -- 192.168.123.105:0/866114034 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f67e0003620 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.317+0000 7f6808ff9640 1 -- 192.168.123.105:0/866114034 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3739477885 0 0) 0x7f67e0003620 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.317+0000 7f6808ff9640 1 -- 192.168.123.105:0/866114034 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f680c1abac0 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.317+0000 7f6808ff9640 1 -- 192.168.123.105:0/866114034 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f67fc0030f0 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.318+0000 7f6808ff9640 1 -- 192.168.123.105:0/866114034 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3792268532 0 0) 0x7f680c1abac0 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.318+0000 7f6808ff9640 1 -- 192.168.123.105:0/866114034 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f680c1abc90 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.318+0000 7f6812130640 1 -- 192.168.123.105:0/866114034 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f680c1abfa0 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.318+0000 7f6812130640 1 -- 192.168.123.105:0/866114034 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f680c1afb30 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.319+0000 7f6808ff9640 1 -- 192.168.123.105:0/866114034 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f67fc003450 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.319+0000 7f6808ff9640 1 -- 192.168.123.105:0/866114034 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f67fc0061a0 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.319+0000 7f6808ff9640 1 -- 192.168.123.105:0/866114034 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 8) ==== 50212+0+0 (unknown 2179938244 0 0) 0x7f67fc007470 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.319+0000 7f6812130640 1 -- 192.168.123.105:0/866114034 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f67d0005180 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.322+0000 7f6808ff9640 1 -- 192.168.123.105:0/866114034 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 1497606905 0 0) 0x7f67fc04e070 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.322+0000 7f6808ff9640 1 -- 192.168.123.105:0/866114034 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f67fc007fa0 con 0x7f680c07c7f0 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.417+0000 7f6812130640 1 -- 192.168.123.105:0/866114034 --> v1:192.168.123.105:6800/3334108074 -- mgr_command(tid 0: {"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}) -- 0x7f67d0002bf0 con 0x7f67e003ec60 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.418+0000 7f6808ff9640 1 -- 192.168.123.105:0/866114034 <== mgr.14118 v1:192.168.123.105:6800/3334108074 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+123 (unknown 0 0 1857503290) 0x7f67d0002bf0 con 0x7f67e003ec60 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.420+0000 7f6812130640 1 -- 192.168.123.105:0/866114034 >> v1:192.168.123.105:6800/3334108074 conn(0x7f67e003ec60 legacy=0x7f67e0041120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.420+0000 7f6812130640 1 -- 192.168.123.105:0/866114034 >> v1:192.168.123.105:6789/0 conn(0x7f680c07c7f0 legacy=0x7f680c1ab3b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.421+0000 7f6812130640 1 -- 192.168.123.105:0/866114034 shutdown_connections 2026-03-10T13:35:38.586 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.421+0000 7f6812130640 1 -- 192.168.123.105:0/866114034 >> 192.168.123.105:0/866114034 conn(0x7f680c101e40 msgr2=0x7f680c108360 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.421+0000 7f6812130640 1 -- 192.168.123.105:0/866114034 shutdown_connections 2026-03-10T13:35:38.586 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.421+0000 7f6812130640 1 -- 192.168.123.105:0/866114034 wait complete. 2026-03-10T13:35:38.587 INFO:teuthology.orchestra.run.vm05.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-10T13:35:38.587 INFO:teuthology.orchestra.run.vm05.stdout:Adding key to root@localhost authorized_keys... 2026-03-10T13:35:38.587 INFO:teuthology.orchestra.run.vm05.stdout:Adding host vm05... 2026-03-10T13:35:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:39 vm05 ceph-mon[51512]: from='client.14134 v1:192.168.123.105:0/3592973527' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:39 vm05 ceph-mon[51512]: Generating ssh key... 2026-03-10T13:35:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:39 vm05 ceph-mon[51512]: mgrmap e8: y(active, since 2s) 2026-03-10T13:35:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:39 vm05 ceph-mon[51512]: from='client.14136 v1:192.168.123.105:0/866114034' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:40.299 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:40 vm05 ceph-mon[51512]: from='client.14138 v1:192.168.123.105:0/1117278794' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm05", "addr": "192.168.123.105", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:40.299 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:40 vm05 ceph-mon[51512]: Deploying cephadm binary to vm05 2026-03-10T13:35:40.438 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Added host 'vm05' with addr '192.168.123.105' 2026-03-10T13:35:40.438 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.714+0000 7fc2fd1e7640 1 Processor -- start 2026-03-10T13:35:40.438 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.714+0000 7fc2fd1e7640 1 -- start start 2026-03-10T13:35:40.438 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.714+0000 7fc2fd1e7640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fc2f810cbe0 con 0x7fc2f81087b0 2026-03-10T13:35:40.438 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.714+0000 7fc2f7fff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fc2f81087b0 0x7fc2f8108bb0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36770/0 (socket says 192.168.123.105:36770) 2026-03-10T13:35:40.438 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.714+0000 7fc2f7fff640 1 -- 192.168.123.105:0/2105050235 learned_addr learned my addr 192.168.123.105:0/2105050235 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:40.438 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-10T13:35:38.714+0000 7fc2f6ffd640 1 -- 192.168.123.105:0/2105050235 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 47043700 0 0) 0x7fc2f810cbe0 con 0x7fc2f81087b0 2026-03-10T13:35:40.438 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.714+0000 7fc2f6ffd640 1 -- 192.168.123.105:0/2105050235 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fc2dc003620 con 0x7fc2f81087b0 2026-03-10T13:35:40.438 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.715+0000 7fc2f6ffd640 1 -- 192.168.123.105:0/2105050235 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 1407243788 0 0) 0x7fc2dc003620 con 0x7fc2f81087b0 2026-03-10T13:35:40.438 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.715+0000 7fc2f6ffd640 1 -- 192.168.123.105:0/2105050235 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc2f810ddc0 con 0x7fc2f81087b0 2026-03-10T13:35:40.438 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.715+0000 7fc2f6ffd640 1 -- 192.168.123.105:0/2105050235 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fc2e8002e10 con 0x7fc2f81087b0 2026-03-10T13:35:40.438 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.715+0000 7fc2f6ffd640 1 -- 192.168.123.105:0/2105050235 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fc2e80034a0 con 0x7fc2f81087b0 2026-03-10T13:35:40.438 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.715+0000 7fc2fd1e7640 1 -- 192.168.123.105:0/2105050235 >> v1:192.168.123.105:6789/0 conn(0x7fc2f81087b0 legacy=0x7fc2f8108bb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:40.438 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.715+0000 7fc2fd1e7640 1 -- 192.168.123.105:0/2105050235 shutdown_connections 2026-03-10T13:35:40.438 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.715+0000 7fc2fd1e7640 1 -- 192.168.123.105:0/2105050235 >> 192.168.123.105:0/2105050235 conn(0x7fc2f807bc90 msgr2=0x7fc2f807c0a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.715+0000 7fc2fd1e7640 1 -- 192.168.123.105:0/2105050235 shutdown_connections 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.716+0000 7fc2fd1e7640 1 -- 192.168.123.105:0/2105050235 wait complete. 
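With the key distributed, the next mgr_command registers the node itself: the "orch host add" for vm05 acknowledged above as "Added host 'vm05' with addr '192.168.123.105'". The plain-CLI form of that step, with a follow-up listing to verify the host is tracked:

    # Register the host with the orchestrator, then confirm it is tracked.
    ceph orch host add vm05 192.168.123.105
    ceph orch host ls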
2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.716+0000 7fc2fd1e7640 1 Processor -- start 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.716+0000 7fc2fd1e7640 1 -- start start 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.716+0000 7fc2fd1e7640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fc2f819d2d0 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.716+0000 7fc2f7fff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fc2f81087b0 0x7fc2f8199e10 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36774/0 (socket says 192.168.123.105:36774) 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.716+0000 7fc2f7fff640 1 -- 192.168.123.105:0/1117278794 learned_addr learned my addr 192.168.123.105:0/1117278794 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.716+0000 7fc2f57fa640 1 -- 192.168.123.105:0/1117278794 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3125925801 0 0) 0x7fc2f819d2d0 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.717+0000 7fc2f57fa640 1 -- 192.168.123.105:0/1117278794 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fc2d0003620 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.717+0000 7fc2f57fa640 1 -- 192.168.123.105:0/1117278794 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2646458261 0 0) 0x7fc2d0003620 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.717+0000 7fc2f57fa640 1 -- 192.168.123.105:0/1117278794 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fc2f819d2d0 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.717+0000 7fc2f57fa640 1 -- 192.168.123.105:0/1117278794 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fc2e80031f0 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.717+0000 7fc2f57fa640 1 -- 192.168.123.105:0/1117278794 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2308357699 0 0) 0x7fc2f819d2d0 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.717+0000 7fc2f57fa640 1 -- 192.168.123.105:0/1117278794 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc2f819d4a0 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.717+0000 7fc2fd1e7640 1 -- 192.168.123.105:0/1117278794 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fc2f819a520 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.717+0000 7fc2fd1e7640 1 -- 192.168.123.105:0/1117278794 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fc2f819a9e0 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.718+0000 7fc2f57fa640 1 -- 192.168.123.105:0/1117278794 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fc2e80034a0 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.719+0000 7fc2f57fa640 1 -- 192.168.123.105:0/1117278794 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fc2e8005eb0 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.719+0000 7fc2fd1e7640 1 -- 192.168.123.105:0/1117278794 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fc2f810d830 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.721+0000 7fc2f57fa640 1 -- 192.168.123.105:0/1117278794 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 8) ==== 50212+0+0 (unknown 2179938244 0 0) 0x7fc2e8012560 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.721+0000 7fc2f57fa640 1 -- 192.168.123.105:0/1117278794 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 1497606905 0 0) 0x7fc2e804e320 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.722+0000 7fc2f57fa640 1 -- 192.168.123.105:0/1117278794 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fc2e8018a70 con 0x7fc2f81087b0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:38.819+0000 7fc2fd1e7640 1 -- 192.168.123.105:0/1117278794 --> v1:192.168.123.105:6800/3334108074 -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm05", "addr": "192.168.123.105", "target": ["mon-mgr", ""]}) -- 0x7fc2f8107370 con 0x7fc2d003e8c0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.295+0000 7fc2f57fa640 1 -- 192.168.123.105:0/1117278794 <== mgr.14118 v1:192.168.123.105:6800/3334108074 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+46 (unknown 0 0 2505307444) 0x7fc2f8107370 con 0x7fc2d003e8c0 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.299+0000 7fc2caffd640 1 -- 192.168.123.105:0/1117278794 >> v1:192.168.123.105:6800/3334108074 conn(0x7fc2d003e8c0 legacy=0x7fc2d0040d80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.299+0000 7fc2caffd640 1 -- 192.168.123.105:0/1117278794 >> v1:192.168.123.105:6789/0 conn(0x7fc2f81087b0 legacy=0x7fc2f8199e10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.299+0000 7fc2caffd640 1 -- 192.168.123.105:0/1117278794 shutdown_connections 
2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.299+0000 7fc2caffd640 1 -- 192.168.123.105:0/1117278794 >> 192.168.123.105:0/1117278794 conn(0x7fc2f807bc90 msgr2=0x7fc2f81055d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.299+0000 7fc2caffd640 1 -- 192.168.123.105:0/1117278794 shutdown_connections 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.300+0000 7fc2caffd640 1 -- 192.168.123.105:0/1117278794 wait complete. 2026-03-10T13:35:40.439 INFO:teuthology.orchestra.run.vm05.stdout:Deploying unmanaged mon service... 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.579+0000 7f655fc63640 1 Processor -- start 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.580+0000 7f655fc63640 1 -- start start 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.580+0000 7f655fc63640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f655810cd80 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.581+0000 7f655d9d8640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f6558108950 0x7f6558108d50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36780/0 (socket says 192.168.123.105:36780) 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.581+0000 7f655d9d8640 1 -- 192.168.123.105:0/385291231 learned_addr learned my addr 192.168.123.105:0/385291231 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.581+0000 7f655c9d6640 1 -- 192.168.123.105:0/385291231 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3794203050 0 0) 0x7f655810cd80 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.581+0000 7f655c9d6640 1 -- 192.168.123.105:0/385291231 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6540003620 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.581+0000 7f655c9d6640 1 -- 192.168.123.105:0/385291231 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 2182116033 0 0) 0x7f6540003620 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.581+0000 7f655c9d6640 1 -- 192.168.123.105:0/385291231 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f655810df60 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.582+0000 7f655c9d6640 1 -- 192.168.123.105:0/385291231 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f654c002e10 con 0x7f6558108950 2026-03-10T13:35:40.834 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.582+0000 7f655c9d6640 1 -- 192.168.123.105:0/385291231 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f654c0034e0 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.582+0000 7f655c9d6640 1 -- 192.168.123.105:0/385291231 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f654c0059d0 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.582+0000 7f655fc63640 1 -- 192.168.123.105:0/385291231 >> v1:192.168.123.105:6789/0 conn(0x7f6558108950 legacy=0x7f6558108d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.582+0000 7f655fc63640 1 -- 192.168.123.105:0/385291231 shutdown_connections 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.582+0000 7f655fc63640 1 -- 192.168.123.105:0/385291231 >> 192.168.123.105:0/385291231 conn(0x7f655807bdf0 msgr2=0x7f655807c240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.582+0000 7f655fc63640 1 -- 192.168.123.105:0/385291231 shutdown_connections 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.582+0000 7f655fc63640 1 -- 192.168.123.105:0/385291231 wait complete. 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.583+0000 7f655fc63640 1 Processor -- start 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.583+0000 7f655fc63640 1 -- start start 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.583+0000 7f655fc63640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f655819ec60 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.583+0000 7f655d9d8640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f6558108950 0x7f655819e550 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36790/0 (socket says 192.168.123.105:36790) 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.583+0000 7f655d9d8640 1 -- 192.168.123.105:0/4293061812 learned_addr learned my addr 192.168.123.105:0/4293061812 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.584+0000 7f6546ffd640 1 -- 192.168.123.105:0/4293061812 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 6222491 0 0) 0x7f655819ec60 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.584+0000 7f6546ffd640 1 -- 192.168.123.105:0/4293061812 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f652c003620 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-10T13:35:40.584+0000 7f6546ffd640 1 -- 192.168.123.105:0/4293061812 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3565202601 0 0) 0x7f652c003620 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.584+0000 7f6546ffd640 1 -- 192.168.123.105:0/4293061812 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f655819ec60 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.584+0000 7f6546ffd640 1 -- 192.168.123.105:0/4293061812 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f654c002890 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.584+0000 7f6546ffd640 1 -- 192.168.123.105:0/4293061812 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 240003024 0 0) 0x7f655819ec60 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.584+0000 7f6546ffd640 1 -- 192.168.123.105:0/4293061812 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f655819ee30 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.585+0000 7f655fc63640 1 -- 192.168.123.105:0/4293061812 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f655819f140 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.585+0000 7f655fc63640 1 -- 192.168.123.105:0/4293061812 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f65581a2c50 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.586+0000 7f655fc63640 1 -- 192.168.123.105:0/4293061812 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f655810db20 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.586+0000 7f6546ffd640 1 -- 192.168.123.105:0/4293061812 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f654c0032e0 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.587+0000 7f6546ffd640 1 -- 192.168.123.105:0/4293061812 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f654c006270 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.587+0000 7f6546ffd640 1 -- 192.168.123.105:0/4293061812 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 8) ==== 50212+0+0 (unknown 2179938244 0 0) 0x7f654c007540 con 0x7f6558108950 2026-03-10T13:35:40.834 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.587+0000 7f6546ffd640 1 -- 192.168.123.105:0/4293061812 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 1497606905 0 0) 0x7f654c04e3a0 con 0x7f6558108950 2026-03-10T13:35:40.835 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.589+0000 7f6546ffd640 1 -- 192.168.123.105:0/4293061812 <== mon.0 
v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f654c018af0 con 0x7f6558108950 2026-03-10T13:35:40.835 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.685+0000 7f655fc63640 1 -- 192.168.123.105:0/4293061812 --> v1:192.168.123.105:6800/3334108074 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}) -- 0x7f65581a3000 con 0x7f652c03e870 2026-03-10T13:35:40.835 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.689+0000 7f6546ffd640 1 -- 192.168.123.105:0/4293061812 <== mgr.14118 v1:192.168.123.105:6800/3334108074 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (unknown 0 0 3265049985) 0x7f65581a3000 con 0x7f652c03e870 2026-03-10T13:35:40.835 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.698+0000 7f655fc63640 1 -- 192.168.123.105:0/4293061812 >> v1:192.168.123.105:6800/3334108074 conn(0x7f652c03e870 legacy=0x7f652c040d30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:40.835 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.698+0000 7f655fc63640 1 -- 192.168.123.105:0/4293061812 >> v1:192.168.123.105:6789/0 conn(0x7f6558108950 legacy=0x7f655819e550 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:40.835 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.699+0000 7f655fc63640 1 -- 192.168.123.105:0/4293061812 shutdown_connections 2026-03-10T13:35:40.835 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.699+0000 7f655fc63640 1 -- 192.168.123.105:0/4293061812 >> 192.168.123.105:0/4293061812 conn(0x7f655807bdf0 msgr2=0x7f65581057f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:40.835 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.700+0000 7f655fc63640 1 -- 192.168.123.105:0/4293061812 shutdown_connections 2026-03-10T13:35:40.835 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.702+0000 7f655fc63640 1 -- 192.168.123.105:0/4293061812 wait complete. 2026-03-10T13:35:40.835 INFO:teuthology.orchestra.run.vm05.stdout:Deploying unmanaged mgr service... 2026-03-10T13:35:41.191 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 
2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.949+0000 7f816159c640 1 Processor -- start 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.950+0000 7f816159c640 1 -- start start 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.950+0000 7f816159c640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f815c10cbe0 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.951+0000 7f815bfff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f815c1087b0 0x7f815c108bb0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36794/0 (socket says 192.168.123.105:36794) 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.951+0000 7f815bfff640 1 -- 192.168.123.105:0/859623497 learned_addr learned my addr 192.168.123.105:0/859623497 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.951+0000 7f815affd640 1 -- 192.168.123.105:0/859623497 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3723361456 0 0) 0x7f815c10cbe0 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.951+0000 7f815affd640 1 -- 192.168.123.105:0/859623497 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8144003620 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.952+0000 7f815affd640 1 -- 192.168.123.105:0/859623497 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 421646851 0 0) 0x7f8144003620 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.952+0000 7f815affd640 1 -- 192.168.123.105:0/859623497 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f815c10ddc0 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.952+0000 7f815affd640 1 -- 192.168.123.105:0/859623497 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f8140002e10 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.952+0000 7f815affd640 1 -- 192.168.123.105:0/859623497 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f81400033e0 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.952+0000 7f815affd640 1 -- 192.168.123.105:0/859623497 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f8140005780 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.952+0000 7f816159c640 1 -- 192.168.123.105:0/859623497 >> v1:192.168.123.105:6789/0 conn(0x7f815c1087b0 legacy=0x7f815c108bb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:41.192 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.953+0000 7f816159c640 1 -- 192.168.123.105:0/859623497 shutdown_connections 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.953+0000 7f816159c640 1 -- 192.168.123.105:0/859623497 >> 192.168.123.105:0/859623497 conn(0x7f815c07bc90 msgr2=0x7f815c07c0a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.953+0000 7f816159c640 1 -- 192.168.123.105:0/859623497 shutdown_connections 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.953+0000 7f816159c640 1 -- 192.168.123.105:0/859623497 wait complete. 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.953+0000 7f816159c640 1 Processor -- start 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.953+0000 7f816159c640 1 -- start start 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.953+0000 7f816159c640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f815c19ea20 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.954+0000 7f815bfff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f815c1087b0 0x7f815c19e310 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36802/0 (socket says 192.168.123.105:36802) 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.954+0000 7f815bfff640 1 -- 192.168.123.105:0/178297727 learned_addr learned my addr 192.168.123.105:0/178297727 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.954+0000 7f81597fa640 1 -- 192.168.123.105:0/178297727 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1732836369 0 0) 0x7f815c19ea20 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.954+0000 7f81597fa640 1 -- 192.168.123.105:0/178297727 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8134003620 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.954+0000 7f81597fa640 1 -- 192.168.123.105:0/178297727 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 702398777 0 0) 0x7f8134003620 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.954+0000 7f81597fa640 1 -- 192.168.123.105:0/178297727 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f815c19ea20 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.954+0000 7f81597fa640 1 -- 192.168.123.105:0/178297727 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f8140002890 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.955+0000 
7f81597fa640 1 -- 192.168.123.105:0/178297727 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2290524177 0 0) 0x7f815c19ea20 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.955+0000 7f81597fa640 1 -- 192.168.123.105:0/178297727 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f815c19ebf0 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.955+0000 7f81597fa640 1 -- 192.168.123.105:0/178297727 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f8140004b90 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.955+0000 7f816159c640 1 -- 192.168.123.105:0/178297727 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f815c19ef00 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.955+0000 7f81597fa640 1 -- 192.168.123.105:0/178297727 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f81400061d0 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.956+0000 7f816159c640 1 -- 192.168.123.105:0/178297727 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f815c1a2a90 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.956+0000 7f81597fa640 1 -- 192.168.123.105:0/178297727 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 8) ==== 50212+0+0 (unknown 2179938244 0 0) 0x7f8140002a50 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.956+0000 7f81597fa640 1 -- 192.168.123.105:0/178297727 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 1497606905 0 0) 0x7f81400030f0 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.957+0000 7f816159c640 1 -- 192.168.123.105:0/178297727 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f815c10d830 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:40.959+0000 7f81597fa640 1 -- 192.168.123.105:0/178297727 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f8140017b70 con 0x7f815c1087b0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.058+0000 7f816159c640 1 -- 192.168.123.105:0/178297727 --> v1:192.168.123.105:6800/3334108074 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}) -- 0x7f815c106fe0 con 0x7f813403eac0 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.062+0000 7f81597fa640 1 -- 192.168.123.105:0/178297727 <== mgr.14118 v1:192.168.123.105:6800/3334108074 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (unknown 0 0 325935098) 0x7f815c106fe0 con 0x7f813403eac0 2026-03-10T13:35:41.192 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.064+0000 7f816159c640 1 -- 192.168.123.105:0/178297727 >> v1:192.168.123.105:6800/3334108074 conn(0x7f813403eac0 legacy=0x7f8134040f80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.064+0000 7f816159c640 1 -- 192.168.123.105:0/178297727 >> v1:192.168.123.105:6789/0 conn(0x7f815c1087b0 legacy=0x7f815c19e310 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.064+0000 7f816159c640 1 -- 192.168.123.105:0/178297727 shutdown_connections 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.064+0000 7f816159c640 1 -- 192.168.123.105:0/178297727 >> 192.168.123.105:0/178297727 conn(0x7f815c07bc90 msgr2=0x7f815c105790 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.065+0000 7f816159c640 1 -- 192.168.123.105:0/178297727 shutdown_connections 2026-03-10T13:35:41.192 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.065+0000 7f816159c640 1 -- 192.168.123.105:0/178297727 wait complete. 2026-03-10T13:35:41.438 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:41 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:41.438 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:41 vm05 ceph-mon[51512]: Added host vm05 2026-03-10T13:35:41.439 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:41 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:41.439 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:41 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:41.439 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:41 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.320+0000 7fb857df8640 1 Processor -- start 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.320+0000 7fb857df8640 1 -- start start 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.320+0000 7fb857df8640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fb8501088f0 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.320+0000 7fb855b6d640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fb850104510 0x7fb850104910 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36804/0 (socket says 192.168.123.105:36804) 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.320+0000 7fb855b6d640 1 -- 192.168.123.105:0/2410082045 learned_addr learned my addr 192.168.123.105:0/2410082045 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.321+0000 7fb854b6b640 1 
-- 192.168.123.105:0/2410082045 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3302010530 0 0) 0x7fb8501088f0 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.321+0000 7fb854b6b640 1 -- 192.168.123.105:0/2410082045 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fb834003620 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.321+0000 7fb854b6b640 1 -- 192.168.123.105:0/2410082045 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 2715192570 0 0) 0x7fb834003620 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.321+0000 7fb854b6b640 1 -- 192.168.123.105:0/2410082045 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb850109ad0 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.321+0000 7fb854b6b640 1 -- 192.168.123.105:0/2410082045 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fb838002e10 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.321+0000 7fb854b6b640 1 -- 192.168.123.105:0/2410082045 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fb8380034a0 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.322+0000 7fb857df8640 1 -- 192.168.123.105:0/2410082045 >> v1:192.168.123.105:6789/0 conn(0x7fb850104510 legacy=0x7fb850104910 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.322+0000 7fb857df8640 1 -- 192.168.123.105:0/2410082045 shutdown_connections 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.322+0000 7fb857df8640 1 -- 192.168.123.105:0/2410082045 >> 192.168.123.105:0/2410082045 conn(0x7fb850100090 msgr2=0x7fb8501024b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.322+0000 7fb857df8640 1 -- 192.168.123.105:0/2410082045 shutdown_connections 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.322+0000 7fb857df8640 1 -- 192.168.123.105:0/2410082045 wait complete. 
2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.322+0000 7fb857df8640 1 Processor -- start 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.323+0000 7fb857df8640 1 -- start start 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.323+0000 7fb857df8640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fb8501a3150 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.323+0000 7fb855b6d640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fb850104510 0x7fb8501a2a40 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36814/0 (socket says 192.168.123.105:36814) 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.323+0000 7fb855b6d640 1 -- 192.168.123.105:0/3950193774 learned_addr learned my addr 192.168.123.105:0/3950193774 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.323+0000 7fb846ffd640 1 -- 192.168.123.105:0/3950193774 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1937351135 0 0) 0x7fb8501a3150 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.323+0000 7fb846ffd640 1 -- 192.168.123.105:0/3950193774 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fb824003620 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.324+0000 7fb846ffd640 1 -- 192.168.123.105:0/3950193774 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 207670230 0 0) 0x7fb824003620 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.324+0000 7fb846ffd640 1 -- 192.168.123.105:0/3950193774 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fb8501a3150 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.324+0000 7fb846ffd640 1 -- 192.168.123.105:0/3950193774 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fb838003270 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.324+0000 7fb846ffd640 1 -- 192.168.123.105:0/3950193774 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 485469185 0 0) 0x7fb8501a3150 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.324+0000 7fb846ffd640 1 -- 192.168.123.105:0/3950193774 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb8501a3320 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.324+0000 7fb857df8640 1 -- 192.168.123.105:0/3950193774 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fb8501a3630 con 0x7fb850104510 2026-03-10T13:35:41.579 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.324+0000 7fb857df8640 1 -- 192.168.123.105:0/3950193774 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fb8501a71c0 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.325+0000 7fb846ffd640 1 -- 192.168.123.105:0/3950193774 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7fb838004fc0 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.325+0000 7fb846ffd640 1 -- 192.168.123.105:0/3950193774 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fb838006110 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.325+0000 7fb846ffd640 1 -- 192.168.123.105:0/3950193774 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 8) ==== 50212+0+0 (unknown 2179938244 0 0) 0x7fb8380073e0 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.326+0000 7fb846ffd640 1 -- 192.168.123.105:0/3950193774 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 1497606905 0 0) 0x7fb83804e4c0 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.326+0000 7fb857df8640 1 -- 192.168.123.105:0/3950193774 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fb818005180 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.330+0000 7fb846ffd640 1 -- 192.168.123.105:0/3950193774 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fb838018c10 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.420+0000 7fb857df8640 1 -- 192.168.123.105:0/3950193774 --> v1:192.168.123.105:6789/0 -- mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) -- 0x7fb818005470 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.425+0000 7fb846ffd640 1 -- 192.168.123.105:0/3950193774 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/container_init}]=0 v6) ==== 142+0+0 (unknown 1123546310 0 0) 0x7fb838018510 con 0x7fb850104510 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.430+0000 7fb857df8640 1 -- 192.168.123.105:0/3950193774 >> v1:192.168.123.105:6800/3334108074 conn(0x7fb82403ecd0 legacy=0x7fb824041190 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:41.579 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.430+0000 7fb857df8640 1 -- 192.168.123.105:0/3950193774 >> v1:192.168.123.105:6789/0 conn(0x7fb850104510 legacy=0x7fb8501a2a40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:41.580 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.430+0000 7fb857df8640 1 -- 192.168.123.105:0/3950193774 shutdown_connections 2026-03-10T13:35:41.580 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.430+0000 7fb857df8640 1 -- 192.168.123.105:0/3950193774 >> 192.168.123.105:0/3950193774 conn(0x7fb850100090 msgr2=0x7fb850100470 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:41.580 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.431+0000 7fb857df8640 1 -- 192.168.123.105:0/3950193774 shutdown_connections 2026-03-10T13:35:41.580 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.431+0000 7fb857df8640 1 -- 192.168.123.105:0/3950193774 wait complete. 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.701+0000 7f106e146640 1 Processor -- start 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.702+0000 7f106e146640 1 -- start start 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.702+0000 7f106e146640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f106810cd80 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.702+0000 7f10677fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f1068108950 0x7f1068108d50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36828/0 (socket says 192.168.123.105:36828) 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.702+0000 7f10677fe640 1 -- 192.168.123.105:0/4087207735 learned_addr learned my addr 192.168.123.105:0/4087207735 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.702+0000 7f10667fc640 1 -- 192.168.123.105:0/4087207735 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1791555777 0 0) 0x7f106810cd80 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.703+0000 7f10667fc640 1 -- 192.168.123.105:0/4087207735 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f1050014a30 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.703+0000 7f10667fc640 1 -- 192.168.123.105:0/4087207735 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 691006452 0 0) 0x7f1050014a30 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.703+0000 7f10667fc640 1 -- 192.168.123.105:0/4087207735 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f106810df60 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.703+0000 7f10667fc640 1 -- 192.168.123.105:0/4087207735 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f1058002e10 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.703+0000 7f10667fc640 1 -- 192.168.123.105:0/4087207735 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f10580033e0 con 
0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.703+0000 7f106e146640 1 -- 192.168.123.105:0/4087207735 >> v1:192.168.123.105:6789/0 conn(0x7f1068108950 legacy=0x7f1068108d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.704+0000 7f106e146640 1 -- 192.168.123.105:0/4087207735 shutdown_connections 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.704+0000 7f106e146640 1 -- 192.168.123.105:0/4087207735 >> 192.168.123.105:0/4087207735 conn(0x7f106807bdf0 msgr2=0x7f106807c240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.704+0000 7f106e146640 1 -- 192.168.123.105:0/4087207735 shutdown_connections 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.704+0000 7f106e146640 1 -- 192.168.123.105:0/4087207735 wait complete. 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.704+0000 7f106e146640 1 Processor -- start 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.704+0000 7f106e146640 1 -- start start 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.704+0000 7f106e146640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f1068196150 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.704+0000 7f10677fe640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f1068108950 0x7f1068195a40 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36842/0 (socket says 192.168.123.105:36842) 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.704+0000 7f10677fe640 1 -- 192.168.123.105:0/463622433 learned_addr learned my addr 192.168.123.105:0/463622433 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.705+0000 7f1064ff9640 1 -- 192.168.123.105:0/463622433 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1214046945 0 0) 0x7f1068196150 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.705+0000 7f1064ff9640 1 -- 192.168.123.105:0/463622433 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f1040003620 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.705+0000 7f1064ff9640 1 -- 192.168.123.105:0/463622433 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2327633641 0 0) 0x7f1040003620 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.705+0000 7f1064ff9640 1 -- 192.168.123.105:0/463622433 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f1068196150 con 0x7f1068108950 2026-03-10T13:35:42.080 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.705+0000 7f1064ff9640 1 -- 192.168.123.105:0/463622433 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f1058003170 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.705+0000 7f1064ff9640 1 -- 192.168.123.105:0/463622433 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2873453447 0 0) 0x7f1068196150 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.705+0000 7f1064ff9640 1 -- 192.168.123.105:0/463622433 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f1068196320 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.705+0000 7f106e146640 1 -- 192.168.123.105:0/463622433 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f1068196630 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.705+0000 7f106e146640 1 -- 192.168.123.105:0/463622433 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f106819a1c0 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.706+0000 7f1064ff9640 1 -- 192.168.123.105:0/463622433 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f1058002900 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.706+0000 7f1064ff9640 1 -- 192.168.123.105:0/463622433 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f1058005d30 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.707+0000 7f1064ff9640 1 -- 192.168.123.105:0/463622433 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 8) ==== 50212+0+0 (unknown 2179938244 0 0) 0x7f10580123e0 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.707+0000 7f1064ff9640 1 -- 192.168.123.105:0/463622433 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 1497606905 0 0) 0x7f105804e1a0 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.707+0000 7f106e146640 1 -- 192.168.123.105:0/463622433 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f1034005180 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.710+0000 7f1064ff9640 1 -- 192.168.123.105:0/463622433 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f10580188f0 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.809+0000 7f106e146640 1 -- 192.168.123.105:0/463622433 --> v1:192.168.123.105:6789/0 -- mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0) -- 0x7f1034005470 con 0x7f1068108950 2026-03-10T13:35:42.080 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.811+0000 7f1064ff9640 1 -- 192.168.123.105:0/463622433 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{prefix=config set, name=mgr/dashboard/ssl_server_port}]=0 v7) ==== 130+0+0 (unknown 1336629364 0 0) 0x7f10580181f0 con 0x7f1068108950 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.820+0000 7f106e146640 1 -- 192.168.123.105:0/463622433 >> v1:192.168.123.105:6800/3334108074 conn(0x7f104003ecd0 legacy=0x7f1040041190 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.820+0000 7f106e146640 1 -- 192.168.123.105:0/463622433 >> v1:192.168.123.105:6789/0 conn(0x7f1068108950 legacy=0x7f1068195a40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.823+0000 7f106e146640 1 -- 192.168.123.105:0/463622433 shutdown_connections 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.823+0000 7f106e146640 1 -- 192.168.123.105:0/463622433 >> 192.168.123.105:0/463622433 conn(0x7f106807bdf0 msgr2=0x7f1068105760 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.824+0000 7f106e146640 1 -- 192.168.123.105:0/463622433 shutdown_connections 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:41.825+0000 7f106e146640 1 -- 192.168.123.105:0/463622433 wait complete. 2026-03-10T13:35:42.080 INFO:teuthology.orchestra.run.vm05.stdout:Enabling the dashboard module... 2026-03-10T13:35:42.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:42 vm05 ceph-mon[51512]: from='client.14140 v1:192.168.123.105:0/4293061812' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:42.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:42 vm05 ceph-mon[51512]: Saving service mon spec with placement count:5 2026-03-10T13:35:42.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:42 vm05 ceph-mon[51512]: from='client.14142 v1:192.168.123.105:0/178297727' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:42.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:42 vm05 ceph-mon[51512]: Saving service mgr spec with placement count:2 2026-03-10T13:35:42.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3950193774' entity='client.admin' 2026-03-10T13:35:42.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/463622433' entity='client.admin' 2026-03-10T13:35:42.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:42 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:42.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:42 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3028328979' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T13:35:43.498 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.225+0000 7f739455d640 1 Processor -- start 2026-03-10T13:35:43.498 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.226+0000 7f739455d640 1 -- start start 2026-03-10T13:35:43.498 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.226+0000 7f739455d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f738c10ab20 con 0x7f738c1066f0 2026-03-10T13:35:43.498 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.226+0000 7f73922d2640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f738c1066f0 0x7f738c106af0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36856/0 (socket says 192.168.123.105:36856) 2026-03-10T13:35:43.498 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.226+0000 7f73922d2640 1 -- 192.168.123.105:0/4035822627 learned_addr learned my addr 192.168.123.105:0/4035822627 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:43.498 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.227+0000 7f73912d0640 1 -- 192.168.123.105:0/4035822627 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2988104632 0 0) 0x7f738c10ab20 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.227+0000 7f73912d0640 1 -- 192.168.123.105:0/4035822627 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7374003620 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.227+0000 7f73912d0640 1 -- 192.168.123.105:0/4035822627 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 1459906006 0 0) 0x7f7374003620 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.228+0000 7f73912d0640 1 -- 192.168.123.105:0/4035822627 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f738c10bd00 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.228+0000 7f73912d0640 1 -- 192.168.123.105:0/4035822627 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f737c002e10 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.228+0000 7f73912d0640 1 -- 192.168.123.105:0/4035822627 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f737c0034a0 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.228+0000 7f73912d0640 1 -- 192.168.123.105:0/4035822627 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f737c0057e0 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.228+0000 7f739455d640 1 -- 192.168.123.105:0/4035822627 >> 
v1:192.168.123.105:6789/0 conn(0x7f738c1066f0 legacy=0x7f738c106af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.229+0000 7f739455d640 1 -- 192.168.123.105:0/4035822627 shutdown_connections 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.229+0000 7f739455d640 1 -- 192.168.123.105:0/4035822627 >> 192.168.123.105:0/4035822627 conn(0x7f738c101e60 msgr2=0x7f738c1042c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.229+0000 7f739455d640 1 -- 192.168.123.105:0/4035822627 shutdown_connections 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.229+0000 7f739455d640 1 -- 192.168.123.105:0/4035822627 wait complete. 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.229+0000 7f739455d640 1 Processor -- start 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.229+0000 7f739455d640 1 -- start start 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.230+0000 7f739455d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f738c19eba0 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.230+0000 7f73922d2640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f738c1066f0 0x7f738c19e490 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36870/0 (socket says 192.168.123.105:36870) 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.230+0000 7f73922d2640 1 -- 192.168.123.105:0/3028328979 learned_addr learned my addr 192.168.123.105:0/3028328979 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.230+0000 7f737b7fe640 1 -- 192.168.123.105:0/3028328979 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 39000593 0 0) 0x7f738c19eba0 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.230+0000 7f737b7fe640 1 -- 192.168.123.105:0/3028328979 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7360003620 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.231+0000 7f737b7fe640 1 -- 192.168.123.105:0/3028328979 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 196092260 0 0) 0x7f7360003620 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.231+0000 7f737b7fe640 1 -- 192.168.123.105:0/3028328979 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f738c19eba0 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.231+0000 7f737b7fe640 1 -- 192.168.123.105:0/3028328979 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 
4183727868 0 0) 0x7f737c002890 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.231+0000 7f737b7fe640 1 -- 192.168.123.105:0/3028328979 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1959371393 0 0) 0x7f738c19eba0 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.232+0000 7f737b7fe640 1 -- 192.168.123.105:0/3028328979 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f738c19ed70 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.232+0000 7f737b7fe640 1 -- 192.168.123.105:0/3028328979 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 973+0+0 (unknown 630610420 0 0) 0x7f737c004b90 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.232+0000 7f739455d640 1 -- 192.168.123.105:0/3028328979 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f738c19f080 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.232+0000 7f739455d640 1 -- 192.168.123.105:0/3028328979 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f738c1a2c10 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.233+0000 7f739455d640 1 -- 192.168.123.105:0/3028328979 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7350005180 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.234+0000 7f737b7fe640 1 -- 192.168.123.105:0/3028328979 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f737c006120 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.234+0000 7f737b7fe640 1 -- 192.168.123.105:0/3028328979 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 8) ==== 50212+0+0 (unknown 2179938244 0 0) 0x7f737c0127b0 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.234+0000 7f737b7fe640 1 -- 192.168.123.105:0/3028328979 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 1497606905 0 0) 0x7f737c04e100 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.237+0000 7f737b7fe640 1 -- 192.168.123.105:0/3028328979 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f737c018850 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.337+0000 7f737b7fe640 1 -- 192.168.123.105:0/3028328979 <== mon.0 v1:192.168.123.105:6789/0 10 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f737c018150 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:42.367+0000 7f739455d640 1 -- 192.168.123.105:0/3028328979 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0) 
-- 0x7f7350005740 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.339+0000 7f737b7fe640 1 -- 192.168.123.105:0/3028328979 <== mon.0 v1:192.168.123.105:6789/0 11 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "dashboard"}]=0 v9) ==== 88+0+0 (unknown 1498667528 0 0) 0x7f737c005b30 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.343+0000 7f737b7fe640 1 -- 192.168.123.105:0/3028328979 <== mon.0 v1:192.168.123.105:6789/0 12 ==== mgrmap(e 9) ==== 50225+0+0 (unknown 573055355 0 0) 0x7f737c04ce40 con 0x7f738c1066f0 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.343+0000 7f739455d640 1 -- 192.168.123.105:0/3028328979 >> v1:192.168.123.105:6800/3334108074 conn(0x7f736003ec30 legacy=0x7f73600410f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.344+0000 7f739455d640 1 -- 192.168.123.105:0/3028328979 >> v1:192.168.123.105:6789/0 conn(0x7f738c1066f0 legacy=0x7f738c19e490 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.344+0000 7f739455d640 1 -- 192.168.123.105:0/3028328979 shutdown_connections 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.344+0000 7f739455d640 1 -- 192.168.123.105:0/3028328979 >> 192.168.123.105:0/3028328979 conn(0x7f738c101e60 msgr2=0x7f738c1022b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.347+0000 7f739455d640 1 -- 192.168.123.105:0/3028328979 shutdown_connections 2026-03-10T13:35:43.499 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.347+0000 7f739455d640 1 -- 192.168.123.105:0/3028328979 wait complete. 2026-03-10T13:35:43.602 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:43 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:43.603 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:43 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:43.603 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:43 vm05 ceph-mon[51512]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:43.603 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:43 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3028328979' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-10T13:35:43.603 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:43 vm05 ceph-mon[51512]: mgrmap e9: y(active, since 7s)
2026-03-10T13:35:43.603 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:43 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ignoring --setuser ceph since I am not root
2026-03-10T13:35:43.603 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:43 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ignoring --setgroup ceph since I am not root
2026-03-10T13:35:43.603 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:43 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:43.518+0000 7ff804f4a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T13:35:43.603 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:43 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:43.573+0000 7ff804f4a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout {
2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "epoch": 9,
2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "active_name": "y",
2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout }
2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.657+0000 7f8e4a420640 1 Processor -- start
2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.657+0000 7f8e4a420640 1 -- start start
2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.657+0000 7f8e4a420640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8e44074770 con 0x7f8e44073bd0
2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.658+0000 7f8e4941e640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f8e44073bd0 0x7f8e44073fd0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36898/0 (socket says 192.168.123.105:36898)
2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.658+0000 7f8e4941e640 1 -- 192.168.123.105:0/3236615019 learned_addr learned my addr 192.168.123.105:0/3236615019 (peer_addr_for_me v1:192.168.123.105:0/0)
2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.659+0000 7f8e33fff640 1 -- 192.168.123.105:0/3236615019 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4088808981 0 0) 0x7f8e44074770 con 0x7f8e44073bd0
2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.659+0000 7f8e33fff640 1 -- 192.168.123.105:0/3236615019 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8e2c003600 con 0x7f8e44073bd0
2026-03-10T13:35:43.918
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.660+0000 7f8e33fff640 1 -- 192.168.123.105:0/3236615019 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 2404221884 0 0) 0x7f8e2c003600 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.660+0000 7f8e33fff640 1 -- 192.168.123.105:0/3236615019 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8e4407d060 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.660+0000 7f8e33fff640 1 -- 192.168.123.105:0/3236615019 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f8e34002e10 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.660+0000 7f8e33fff640 1 -- 192.168.123.105:0/3236615019 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f8e34003400 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.660+0000 7f8e33fff640 1 -- 192.168.123.105:0/3236615019 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f8e340059d0 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.660+0000 7f8e4a420640 1 -- 192.168.123.105:0/3236615019 >> v1:192.168.123.105:6789/0 conn(0x7f8e44073bd0 legacy=0x7f8e44073fd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.661+0000 7f8e4a420640 1 -- 192.168.123.105:0/3236615019 shutdown_connections 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.661+0000 7f8e4a420640 1 -- 192.168.123.105:0/3236615019 >> 192.168.123.105:0/3236615019 conn(0x7f8e4406f4e0 msgr2=0x7f8e44071920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.661+0000 7f8e4a420640 1 -- 192.168.123.105:0/3236615019 shutdown_connections 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.661+0000 7f8e4a420640 1 -- 192.168.123.105:0/3236615019 wait complete. 
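
[Annotation] The exchange above is the dashboard bootstrap step: the harness enables the dashboard manager module (acked by mon.a as "mgr module enable") and then reads back the current mgrmap epoch, the { "epoch": 9, ... } blob printed on stdout. A minimal by-hand equivalent, assuming an admin keyring is available on the host, would be:

    ceph mgr module enable dashboard   # matches the "mgr module enable" dispatch/ack above
    ceph mgr stat                      # prints epoch/available/active_name/num_standby

Each /usr/bin/ceph invocation in this log spins up a fresh messenger instance, which is why every command is bracketed by the same auth/mon_subscribe setup and mark_down/shutdown_connections teardown chatter.
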
2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.661+0000 7f8e4a420640 1 Processor -- start 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.662+0000 7f8e4a420640 1 -- start start 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.662+0000 7f8e4a420640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8e44081520 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.662+0000 7f8e4941e640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f8e44073bd0 0x7f8e44080e10 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36900/0 (socket says 192.168.123.105:36900) 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.662+0000 7f8e4941e640 1 -- 192.168.123.105:0/1095205776 learned_addr learned my addr 192.168.123.105:0/1095205776 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.662+0000 7f8e327fc640 1 -- 192.168.123.105:0/1095205776 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1448777281 0 0) 0x7f8e44081520 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.662+0000 7f8e327fc640 1 -- 192.168.123.105:0/1095205776 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8e24003650 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.663+0000 7f8e327fc640 1 -- 192.168.123.105:0/1095205776 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3279841213 0 0) 0x7f8e24003650 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.663+0000 7f8e327fc640 1 -- 192.168.123.105:0/1095205776 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f8e44081520 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.663+0000 7f8e327fc640 1 -- 192.168.123.105:0/1095205776 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f8e34002890 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.663+0000 7f8e327fc640 1 -- 192.168.123.105:0/1095205776 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2445684816 0 0) 0x7f8e44081520 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.663+0000 7f8e327fc640 1 -- 192.168.123.105:0/1095205776 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8e440816f0 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.663+0000 7f8e4a420640 1 -- 192.168.123.105:0/1095205776 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f8e4407d950 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.663+0000 7f8e4a420640 1 -- 192.168.123.105:0/1095205776 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f8e4407de90 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.664+0000 7f8e4a420640 1 -- 192.168.123.105:0/1095205776 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8e1c005180 con 0x7f8e44073bd0 2026-03-10T13:35:43.918 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.664+0000 7f8e327fc640 1 -- 192.168.123.105:0/1095205776 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f8e340055f0 con 0x7f8e44073bd0 2026-03-10T13:35:43.919 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.664+0000 7f8e327fc640 1 -- 192.168.123.105:0/1095205776 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f8e34005ec0 con 0x7f8e44073bd0 2026-03-10T13:35:43.919 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.665+0000 7f8e327fc640 1 -- 192.168.123.105:0/1095205776 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 9) ==== 50225+0+0 (unknown 573055355 0 0) 0x7f8e340125a0 con 0x7f8e44073bd0 2026-03-10T13:35:43.919 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.665+0000 7f8e327fc640 1 -- 192.168.123.105:0/1095205776 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 1497606905 0 0) 0x7f8e3404e260 con 0x7f8e44073bd0 2026-03-10T13:35:43.919 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.667+0000 7f8e48c1d640 1 -- 192.168.123.105:0/1095205776 >> v1:192.168.123.105:6800/3334108074 conn(0x7f8e2403ec90 legacy=0x7f8e24041150 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/3334108074 2026-03-10T13:35:43.919 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.668+0000 7f8e327fc640 1 -- 192.168.123.105:0/1095205776 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f8e34018930 con 0x7f8e44073bd0 2026-03-10T13:35:43.919 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.783+0000 7f8e4a420640 1 -- 192.168.123.105:0/1095205776 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "mgr stat"} v 0) -- 0x7f8e1c005d40 con 0x7f8e44073bd0 2026-03-10T13:35:43.919 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.785+0000 7f8e327fc640 1 -- 192.168.123.105:0/1095205776 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v9) ==== 56+0+88 (unknown 2005831594 0 1748205903) 0x7f8e34018230 con 0x7f8e44073bd0 2026-03-10T13:35:43.919 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.788+0000 7f8e13fff640 1 -- 192.168.123.105:0/1095205776 >> v1:192.168.123.105:6800/3334108074 conn(0x7f8e2403ec90 legacy=0x7f8e24041150 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T13:35:43.919 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.788+0000 7f8e13fff640 1 -- 192.168.123.105:0/1095205776 >> v1:192.168.123.105:6789/0 conn(0x7f8e44073bd0 legacy=0x7f8e44080e10 
unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:35:43.919 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.788+0000 7f8e13fff640 1 -- 192.168.123.105:0/1095205776 shutdown_connections
2026-03-10T13:35:43.919 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.788+0000 7f8e13fff640 1 -- 192.168.123.105:0/1095205776 >> 192.168.123.105:0/1095205776 conn(0x7f8e4406f4e0 msgr2=0x7f8e44079830 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:35:43.919 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.788+0000 7f8e13fff640 1 -- 192.168.123.105:0/1095205776 shutdown_connections
2026-03-10T13:35:43.919 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:43.788+0000 7f8e13fff640 1 -- 192.168.123.105:0/1095205776 wait complete.
2026-03-10T13:35:43.919 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for the mgr to restart...
2026-03-10T13:35:43.919 INFO:teuthology.orchestra.run.vm05.stdout:Waiting for mgr epoch 9...
2026-03-10T13:35:44.265 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:44 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:44.028+0000 7ff804f4a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T13:35:44.552 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1095205776' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T13:35:44.552 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:44 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:44.357+0000 7ff804f4a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T13:35:44.552 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:44 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T13:35:44.552 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:44 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
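
[Annotation] Enabling a module forces the active mgr to respawn, so the harness records the pre-restart epoch ("Waiting for mgr epoch 9...") and polls until the respawned daemon reports a mgrmap epoch at least that new. A rough sketch of that poll, assuming jq is installed (the harness itself issues the same mon/mgr commands directly through /usr/bin/ceph):

    # record the epoch the mon currently advertises
    epoch=$(ceph mgr stat | jq .epoch)
    # mgr_status is served by the mgr daemon itself; compare with the
    # command(tid 1: {"prefix": "mgr_status"}) exchange later in this log
    until [ "$(ceph tell mgr mgr_status | jq .mgrmap_epoch)" -ge "$epoch" ]; do
        sleep 1
    done

The "Module ... has missing NOTIFY_TYPES member" lines below are emitted once per mgr module as the restarted daemon loads its Python plugins; they are warnings, not failures, and the run proceeds past them.
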
2026-03-10T13:35:44.552 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:44 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: from numpy import show_config as show_numpy_config 2026-03-10T13:35:44.552 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:44 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:44.441+0000 7ff804f4a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T13:35:44.552 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:44 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:44.478+0000 7ff804f4a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T13:35:44.552 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:44 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:44.551+0000 7ff804f4a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T13:35:45.296 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:45 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:45.040+0000 7ff804f4a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T13:35:45.316 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:45 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:45.148+0000 7ff804f4a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:35:45.316 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:45 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:45.188+0000 7ff804f4a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T13:35:45.316 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:45 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:45.222+0000 7ff804f4a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T13:35:45.317 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:45 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:45.261+0000 7ff804f4a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T13:35:45.317 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:45 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:45.295+0000 7ff804f4a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T13:35:45.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:45 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:45.453+0000 7ff804f4a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T13:35:45.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:45 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:45.502+0000 7ff804f4a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T13:35:45.999 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:45 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:45.729+0000 7ff804f4a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T13:35:45.999 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:46 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:45.999+0000 7ff804f4a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T13:35:46.257 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:46 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:46.033+0000 7ff804f4a140 -1 mgr[py] Module selftest has 
missing NOTIFY_TYPES member 2026-03-10T13:35:46.257 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:46 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:46.072+0000 7ff804f4a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T13:35:46.257 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:46 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:46.144+0000 7ff804f4a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T13:35:46.257 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:46 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:46.178+0000 7ff804f4a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T13:35:46.257 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:46 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:46.256+0000 7ff804f4a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T13:35:46.529 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:46 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:46.363+0000 7ff804f4a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:35:46.529 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:46 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:46.493+0000 7ff804f4a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T13:35:46.831 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:35:46 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:35:46.528+0000 7ff804f4a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T13:35:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:46 vm05 ceph-mon[51512]: Active manager daemon y restarted 2026-03-10T13:35:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:46 vm05 ceph-mon[51512]: Activating manager daemon y 2026-03-10T13:35:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:46 vm05 ceph-mon[51512]: osdmap e3: 0 total, 0 up, 0 in 2026-03-10T13:35:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:46 vm05 ceph-mon[51512]: mgrmap e10: y(active, starting, since 0.00717009s) 2026-03-10T13:35:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:46 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:35:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:46 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T13:35:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:46 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:35:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:46 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:35:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:46 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:35:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:46 vm05 ceph-mon[51512]: Manager daemon y is now available 2026-03-10T13:35:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 
13:35:46 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.057+0000 7f98858aa640 1 Processor -- start 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.057+0000 7f98858aa640 1 -- start start 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.057+0000 7f98858aa640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9878009900 con 0x7f9878005460 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.057+0000 7f98848a8640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f9878005460 0x7f9878005860 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36912/0 (socket says 192.168.123.105:36912) 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.057+0000 7f98848a8640 1 -- 192.168.123.105:0/2530233261 learned_addr learned my addr 192.168.123.105:0/2530233261 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.058+0000 7f987f7fe640 1 -- 192.168.123.105:0/2530233261 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 227885971 0 0) 0x7f9878009900 con 0x7f9878005460 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.058+0000 7f987f7fe640 1 -- 192.168.123.105:0/2530233261 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f986c003620 con 0x7f9878005460 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.058+0000 7f987f7fe640 1 -- 192.168.123.105:0/2530233261 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 2056586074 0 0) 0x7f986c003620 con 0x7f9878005460 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.058+0000 7f987f7fe640 1 -- 192.168.123.105:0/2530233261 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f987800aae0 con 0x7f9878005460 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.059+0000 7f987f7fe640 1 -- 192.168.123.105:0/2530233261 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f9874002a70 con 0x7f9878005460 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.059+0000 7f987f7fe640 1 -- 192.168.123.105:0/2530233261 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f9874003060 con 0x7f9878005460 2026-03-10T13:35:47.702 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.059+0000 7f98858aa640 1 -- 192.168.123.105:0/2530233261 >> v1:192.168.123.105:6789/0 conn(0x7f9878005460 legacy=0x7f9878005860 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.059+0000 7f98858aa640 1 -- 192.168.123.105:0/2530233261 shutdown_connections 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.059+0000 7f98858aa640 1 -- 192.168.123.105:0/2530233261 >> 192.168.123.105:0/2530233261 conn(0x7f987809fbb0 msgr2=0x7f98780a2010 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.060+0000 7f98858aa640 1 -- 192.168.123.105:0/2530233261 shutdown_connections 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.060+0000 7f98858aa640 1 -- 192.168.123.105:0/2530233261 wait complete. 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.060+0000 7f98858aa640 1 Processor -- start 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.060+0000 7f98858aa640 1 -- start start 2026-03-10T13:35:47.702 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.060+0000 7f98858aa640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f987814f2f0 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.060+0000 7f98848a8640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f9878005460 0x7f9878015a40 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:36916/0 (socket says 192.168.123.105:36916) 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.060+0000 7f98848a8640 1 -- 192.168.123.105:0/3925643261 learned_addr learned my addr 192.168.123.105:0/3925643261 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.061+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2365341201 0 0) 0x7f987814f2f0 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.061+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9858003620 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.061+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3267154008 0 0) 0x7f9858003620 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.061+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f987814f2f0 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 
2026-03-10T13:35:44.062+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f9874004e50 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.062+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1765219138 0 0) 0x7f987814f2f0 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.062+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f98781504d0 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.062+0000 7f98858aa640 1 -- 192.168.123.105:0/3925643261 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f987814f4c0 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.063+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f9874002c70 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.063+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f9874006600 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.063+0000 7f98858aa640 1 -- 192.168.123.105:0/3925643261 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f987814f980 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.064+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 9) ==== 50225+0+0 (unknown 573055355 0 0) 0x7f9874006860 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.064+0000 7f987ffff640 1 -- 192.168.123.105:0/3925643261 >> v1:192.168.123.105:6800/3334108074 conn(0x7f985803ec10 legacy=0x7f98580410d0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/3334108074 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.064+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (unknown 1497606905 0 0) 0x7f987404e8d0 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.065+0000 7f98858aa640 1 -- 192.168.123.105:0/3925643261 --> v1:192.168.123.105:6800/3334108074 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7f984c000d10 con 0x7f985803ec10 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.264+0000 7f987ffff640 1 -- 192.168.123.105:0/3925643261 >> v1:192.168.123.105:6800/3334108074 conn(0x7f985803ec10 legacy=0x7f98580410d0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/3334108074 2026-03-10T13:35:47.703 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:44.664+0000 7f987ffff640 1 -- 192.168.123.105:0/3925643261 >> v1:192.168.123.105:6800/3334108074 conn(0x7f985803ec10 legacy=0x7f98580410d0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/3334108074 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:45.465+0000 7f987ffff640 1 -- 192.168.123.105:0/3925643261 >> v1:192.168.123.105:6800/3334108074 conn(0x7f985803ec10 legacy=0x7f98580410d0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/3334108074 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:46.534+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mgrmap(e 10) ==== 50027+0+0 (unknown 4048559269 0 0) 0x7f9874008000 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:46.535+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 >> v1:192.168.123.105:6800/3334108074 conn(0x7f985803ec10 legacy=0x7f98580410d0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.538+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mgrmap(e 11) ==== 50119+0+0 (unknown 2320845000 0 0) 0x7f9874021180 con 0x7f9878005460 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.539+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 --> v1:192.168.123.105:6800/3845654103 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7f9874017170 con 0x7f9858043540 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.542+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== command_reply(tid 0: 0 ) ==== 8+0+8901 (unknown 0 0 3832181493) 0x7f9874017170 con 0x7f9858043540 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.545+0000 7f98577fe640 1 -- 192.168.123.105:0/3925643261 --> v1:192.168.123.105:6800/3845654103 -- command(tid 1: {"prefix": "mgr_status"}) -- 0x7f9840000d10 con 0x7f9858043540 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.546+0000 7f987dffb640 1 -- 192.168.123.105:0/3925643261 <== mgr.14150 v1:192.168.123.105:6800/3845654103 2 ==== command_reply(tid 1: 0 ) ==== 8+0+52 (unknown 0 0 3086460295) 0x7f9840000d10 con 0x7f9858043540 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.546+0000 7f98858aa640 1 -- 192.168.123.105:0/3925643261 >> v1:192.168.123.105:6800/3845654103 conn(0x7f9858043540 legacy=0x7f9858045930 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.546+0000 7f98858aa640 1 -- 192.168.123.105:0/3925643261 >> v1:192.168.123.105:6789/0 conn(0x7f9878005460 legacy=0x7f9878015a40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.547+0000 7f98858aa640 1 -- 192.168.123.105:0/3925643261 shutdown_connections 
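
[Annotation] The repeated "reconnect failed to v1:...:6800/3334108074" lines above are the poller still dialing the old active mgr instance while it restarts; once mgrmap e11 arrives and announces the respawned daemon (nonce 3845654103 on the same port), the mgr_status tell goes through and returns {"mgrmap_epoch": 11, "initialized": true}. The same state can be read from the monitor by hand, for example:

    # dump the mgrmap and pick out the fields the poll cares about
    ceph mgr dump | grep -E '"(epoch|active_name|available)"'
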
2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.547+0000 7f98858aa640 1 -- 192.168.123.105:0/3925643261 >> 192.168.123.105:0/3925643261 conn(0x7f987809fbb0 msgr2=0x7f98780a1ff0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.547+0000 7f98858aa640 1 -- 192.168.123.105:0/3925643261 shutdown_connections
2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.547+0000 7f98858aa640 1 -- 192.168.123.105:0/3925643261 wait complete.
2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:mgr epoch 9 is available
2026-03-10T13:35:47.703 INFO:teuthology.orchestra.run.vm05.stdout:Generating a dashboard self-signed certificate...
2026-03-10T13:35:47.790 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T13:35:47.790 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T13:35:47.791 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:35:47.791 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:47 vm05 ceph-mon[51512]: [10/Mar/2026:13:35:47] ENGINE Bus STARTING
2026-03-10T13:35:47.791 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:47 vm05 ceph-mon[51512]: [10/Mar/2026:13:35:47] ENGINE Serving on http://192.168.123.105:8765
2026-03-10T13:35:47.791 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:47 vm05 ceph-mon[51512]: [10/Mar/2026:13:35:47] ENGINE Serving on https://192.168.123.105:7150
2026-03-10T13:35:47.791 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:47 vm05 ceph-mon[51512]: [10/Mar/2026:13:35:47] ENGINE Bus STARTED
2026-03-10T13:35:47.791 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:47 vm05 ceph-mon[51512]: [10/Mar/2026:13:35:47] ENGINE Client ('192.168.123.105', 35700) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T13:35:47.791 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:35:47.791 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:47 vm05 ceph-mon[51512]: mgrmap e11: y(active, since 1.0106s)
2026-03-10T13:35:48.147 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout Self-signed certificate created
2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.864+0000 7fbd498f9640 1 Processor -- start
2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.864+0000 7fbd498f9640 1 -- start start
2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.864+0000 7fbd498f9640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fbd4410cd80 con 0x7fbd44108950
2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.865+0000 7fbd42ffd640 1
--1- >> v1:192.168.123.105:6789/0 conn(0x7fbd44108950 0x7fbd44108d50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:34256/0 (socket says 192.168.123.105:34256) 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.865+0000 7fbd42ffd640 1 -- 192.168.123.105:0/2735432375 learned_addr learned my addr 192.168.123.105:0/2735432375 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.865+0000 7fbd41ffb640 1 -- 192.168.123.105:0/2735432375 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2842529708 0 0) 0x7fbd4410cd80 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.865+0000 7fbd41ffb640 1 -- 192.168.123.105:0/2735432375 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fbd28003620 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.865+0000 7fbd41ffb640 1 -- 192.168.123.105:0/2735432375 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 2668502691 0 0) 0x7fbd28003620 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.865+0000 7fbd41ffb640 1 -- 192.168.123.105:0/2735432375 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbd4410df60 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.865+0000 7fbd41ffb640 1 -- 192.168.123.105:0/2735432375 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fbd2c002e10 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.865+0000 7fbd41ffb640 1 -- 192.168.123.105:0/2735432375 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fbd2c003400 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.866+0000 7fbd498f9640 1 -- 192.168.123.105:0/2735432375 >> v1:192.168.123.105:6789/0 conn(0x7fbd44108950 legacy=0x7fbd44108d50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.866+0000 7fbd498f9640 1 -- 192.168.123.105:0/2735432375 shutdown_connections 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.866+0000 7fbd498f9640 1 -- 192.168.123.105:0/2735432375 >> 192.168.123.105:0/2735432375 conn(0x7fbd4407bdf0 msgr2=0x7fbd4407c240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.866+0000 7fbd498f9640 1 -- 192.168.123.105:0/2735432375 shutdown_connections 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.866+0000 7fbd498f9640 1 -- 192.168.123.105:0/2735432375 wait complete. 
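
[Annotation] The bootstrap now provisions the dashboard's TLS endpoint and its first account: the ssl_server_port ack at the top of this excerpt plus the ENGINE lines above show the module serving on 8765 (http) and 7150 (https). A by-hand equivalent of that setting and of the two steps that follow, using a hypothetical password file path, would be:

    ceph config set mgr mgr/dashboard/ssl_server_port 7150
    ceph dashboard create-self-signed-cert
    # ac-user-create reads the password from a file; /tmp/dashboard-pass is illustrative
    ceph dashboard ac-user-create admin -i /tmp/dashboard-pass administrator

The bcrypt hash and "pwdUpdateRequired": true in the JSON printed after "Creating initial admin user..." are the normal ac-user-create response; the cleartext password itself never appears in the log.
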
2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.867+0000 7fbd498f9640 1 Processor -- start 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.867+0000 7fbd498f9640 1 -- start start 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.867+0000 7fbd498f9640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fbd4419ecd0 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.867+0000 7fbd42ffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fbd44108950 0x7fbd4419e5c0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:34272/0 (socket says 192.168.123.105:34272) 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.867+0000 7fbd42ffd640 1 -- 192.168.123.105:0/1595454539 learned_addr learned my addr 192.168.123.105:0/1595454539 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.867+0000 7fbd488f7640 1 -- 192.168.123.105:0/1595454539 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3373351806 0 0) 0x7fbd4419ecd0 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.867+0000 7fbd488f7640 1 -- 192.168.123.105:0/1595454539 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fbd1c003620 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.867+0000 7fbd488f7640 1 -- 192.168.123.105:0/1595454539 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 908005446 0 0) 0x7fbd1c003620 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.868+0000 7fbd488f7640 1 -- 192.168.123.105:0/1595454539 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fbd4419ecd0 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.868+0000 7fbd488f7640 1 -- 192.168.123.105:0/1595454539 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fbd2c003170 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.868+0000 7fbd488f7640 1 -- 192.168.123.105:0/1595454539 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2899658179 0 0) 0x7fbd4419ecd0 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.868+0000 7fbd488f7640 1 -- 192.168.123.105:0/1595454539 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbd4419eea0 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.868+0000 7fbd498f9640 1 -- 192.168.123.105:0/1595454539 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fbd4419f1b0 con 0x7fbd44108950 2026-03-10T13:35:48.149 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.868+0000 7fbd498f9640 1 -- 192.168.123.105:0/1595454539 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fbd441a2d40 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.869+0000 7fbd488f7640 1 -- 192.168.123.105:0/1595454539 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fbd2c004e20 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.869+0000 7fbd488f7640 1 -- 192.168.123.105:0/1595454539 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fbd2c005dc0 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.869+0000 7fbd488f7640 1 -- 192.168.123.105:0/1595454539 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 11) ==== 50119+0+0 (unknown 2320845000 0 0) 0x7fbd2c012430 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.869+0000 7fbd498f9640 1 -- 192.168.123.105:0/1595454539 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fbd10005180 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.870+0000 7fbd488f7640 1 -- 192.168.123.105:0/1595454539 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 238952526 0 0) 0x7fbd2c04e010 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.872+0000 7fbd488f7640 1 -- 192.168.123.105:0/1595454539 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fbd2c018940 con 0x7fbd44108950 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:47.969+0000 7fbd498f9640 1 -- 192.168.123.105:0/1595454539 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}) -- 0x7fbd10002bf0 con 0x7fbd1c03ebb0 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.015+0000 7fbd488f7640 1 -- 192.168.123.105:0/1595454539 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 3317752739) 0x7fbd10002bf0 con 0x7fbd1c03ebb0 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.017+0000 7fbd498f9640 1 -- 192.168.123.105:0/1595454539 >> v1:192.168.123.105:6800/3845654103 conn(0x7fbd1c03ebb0 legacy=0x7fbd1c041070 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.017+0000 7fbd498f9640 1 -- 192.168.123.105:0/1595454539 >> v1:192.168.123.105:6789/0 conn(0x7fbd44108950 legacy=0x7fbd4419e5c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.018+0000 7fbd498f9640 1 -- 192.168.123.105:0/1595454539 shutdown_connections 2026-03-10T13:35:48.149 
INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.018+0000 7fbd498f9640 1 -- 192.168.123.105:0/1595454539 >> 192.168.123.105:0/1595454539 conn(0x7fbd4407bdf0 msgr2=0x7fbd44105810 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.018+0000 7fbd498f9640 1 -- 192.168.123.105:0/1595454539 shutdown_connections 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.018+0000 7fbd498f9640 1 -- 192.168.123.105:0/1595454539 wait complete. 2026-03-10T13:35:48.149 INFO:teuthology.orchestra.run.vm05.stdout:Creating initial admin user... 2026-03-10T13:35:48.669 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$98EURsymtmMB6VCiKvUJWOYWBOdk5Zrwo8YMIRhN4NMrP0/O5Yttu", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773149748, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-10T13:35:48.669 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.277+0000 7fd94533d640 1 Processor -- start 2026-03-10T13:35:48.669 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.277+0000 7fd94533d640 1 -- start start 2026-03-10T13:35:48.669 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.277+0000 7fd94533d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd940109850 con 0x7fd940105420 2026-03-10T13:35:48.669 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.278+0000 7fd93ffff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fd940105420 0x7fd940105820 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:34274/0 (socket says 192.168.123.105:34274) 2026-03-10T13:35:48.669 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.278+0000 7fd93ffff640 1 -- 192.168.123.105:0/4221445424 learned_addr learned my addr 192.168.123.105:0/4221445424 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:48.669 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.278+0000 7fd93effd640 1 -- 192.168.123.105:0/4221445424 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 301863776 0 0) 0x7fd940109850 con 0x7fd940105420 2026-03-10T13:35:48.669 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.278+0000 7fd93effd640 1 -- 192.168.123.105:0/4221445424 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd920003620 con 0x7fd940105420 2026-03-10T13:35:48.669 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.278+0000 7fd93effd640 1 -- 192.168.123.105:0/4221445424 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 3024541929 0 0) 0x7fd920003620 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.278+0000 7fd93effd640 1 -- 192.168.123.105:0/4221445424 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd94010aa30 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.279+0000 7fd93effd640 1 -- 
192.168.123.105:0/4221445424 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fd924002e10 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.279+0000 7fd93effd640 1 -- 192.168.123.105:0/4221445424 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fd9240034c0 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.279+0000 7fd94533d640 1 -- 192.168.123.105:0/4221445424 >> v1:192.168.123.105:6789/0 conn(0x7fd940105420 legacy=0x7fd940105820 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.279+0000 7fd94533d640 1 -- 192.168.123.105:0/4221445424 shutdown_connections 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.279+0000 7fd94533d640 1 -- 192.168.123.105:0/4221445424 >> 192.168.123.105:0/4221445424 conn(0x7fd940100bd0 msgr2=0x7fd940102ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.279+0000 7fd94533d640 1 -- 192.168.123.105:0/4221445424 shutdown_connections 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.279+0000 7fd94533d640 1 -- 192.168.123.105:0/4221445424 wait complete. 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.280+0000 7fd94533d640 1 Processor -- start 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.280+0000 7fd94533d640 1 -- start start 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.280+0000 7fd94533d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd94019eb00 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.280+0000 7fd93ffff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fd940105420 0x7fd94019e3f0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:34286/0 (socket says 192.168.123.105:34286) 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.280+0000 7fd93ffff640 1 -- 192.168.123.105:0/850809012 learned_addr learned my addr 192.168.123.105:0/850809012 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.280+0000 7fd93d7fa640 1 -- 192.168.123.105:0/850809012 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3044021243 0 0) 0x7fd94019eb00 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.280+0000 7fd93d7fa640 1 -- 192.168.123.105:0/850809012 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd910003620 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.281+0000 7fd93d7fa640 1 -- 192.168.123.105:0/850809012 <== mon.0 v1:192.168.123.105:6789/0 2 ==== 
auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1505219728 0 0) 0x7fd910003620 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.281+0000 7fd93d7fa640 1 -- 192.168.123.105:0/850809012 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd94019eb00 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.281+0000 7fd93d7fa640 1 -- 192.168.123.105:0/850809012 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fd924002ff0 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.281+0000 7fd93d7fa640 1 -- 192.168.123.105:0/850809012 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1400197620 0 0) 0x7fd94019eb00 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.281+0000 7fd93d7fa640 1 -- 192.168.123.105:0/850809012 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd94019ecd0 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.281+0000 7fd94533d640 1 -- 192.168.123.105:0/850809012 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fd94019efe0 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.281+0000 7fd94533d640 1 -- 192.168.123.105:0/850809012 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fd9401a2b70 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.282+0000 7fd93d7fa640 1 -- 192.168.123.105:0/850809012 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fd924004f00 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.282+0000 7fd93d7fa640 1 -- 192.168.123.105:0/850809012 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fd924006210 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.282+0000 7fd93d7fa640 1 -- 192.168.123.105:0/850809012 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 11) ==== 50119+0+0 (unknown 2320845000 0 0) 0x7fd9240074e0 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.282+0000 7fd93d7fa640 1 -- 192.168.123.105:0/850809012 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 238952526 0 0) 0x7fd92404d150 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.282+0000 7fd94533d640 1 -- 192.168.123.105:0/850809012 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd94010a510 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.285+0000 7fd93d7fa640 1 -- 192.168.123.105:0/850809012 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 
2568732696) 0x7fd924018750 con 0x7fd940105420 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.386+0000 7fd94533d640 1 -- 192.168.123.105:0/850809012 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}) -- 0x7fd9401a2e60 con 0x7fd91003eae0 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.537+0000 7fd93d7fa640 1 -- 192.168.123.105:0/850809012 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+252 (unknown 0 0 2547587151) 0x7fd9401a2e60 con 0x7fd91003eae0 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.540+0000 7fd94533d640 1 -- 192.168.123.105:0/850809012 >> v1:192.168.123.105:6800/3845654103 conn(0x7fd91003eae0 legacy=0x7fd910040fa0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.540+0000 7fd94533d640 1 -- 192.168.123.105:0/850809012 >> v1:192.168.123.105:6789/0 conn(0x7fd940105420 legacy=0x7fd94019e3f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.540+0000 7fd94533d640 1 -- 192.168.123.105:0/850809012 shutdown_connections 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.540+0000 7fd94533d640 1 -- 192.168.123.105:0/850809012 >> 192.168.123.105:0/850809012 conn(0x7fd940100bd0 msgr2=0x7fd940102fc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.540+0000 7fd94533d640 1 -- 192.168.123.105:0/850809012 shutdown_connections 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.540+0000 7fd94533d640 1 -- 192.168.123.105:0/850809012 wait complete. 2026-03-10T13:35:48.671 INFO:teuthology.orchestra.run.vm05.stdout:Fetching dashboard port number... 
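The "Creating initial admin user..." step above drives the mgr dashboard module through the "dashboard ac-user-create" mon-mgr command shown in the payload, and the JSON echoed on stdout is the created user record with its bcrypt password hash. A sketch of the equivalent manual call, assuming a host with the admin keyring; the password file path and value are illustrative, and the exact CLI spellings for the force_password/pwd_update_required options may differ by release:

    # Create a dashboard admin user by hand; the password is supplied via -i,
    # not on the command line.
    printf '%s' 'example-password' > /tmp/dashboard-pass
    ceph dashboard ac-user-create admin -i /tmp/dashboard-pass administrator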
2026-03-10T13:35:49.039 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stdout 8443 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.806+0000 7f6156d5d640 1 Processor -- start 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.807+0000 7f6156d5d640 1 -- start start 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.807+0000 7f6156d5d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6150108f80 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.807+0000 7f6155d5b640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f6150104b50 0x7f6150104f50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:34302/0 (socket says 192.168.123.105:34302) 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.807+0000 7f6155d5b640 1 -- 192.168.123.105:0/881992038 learned_addr learned my addr 192.168.123.105:0/881992038 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.807+0000 7f6154d59640 1 -- 192.168.123.105:0/881992038 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 527323545 0 0) 0x7f6150108f80 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.807+0000 7f6154d59640 1 -- 192.168.123.105:0/881992038 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6130003620 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.808+0000 7f6154d59640 1 -- 192.168.123.105:0/881992038 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 189894735 0 0) 0x7f6130003620 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.808+0000 7f6154d59640 1 -- 192.168.123.105:0/881992038 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f615010a160 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.808+0000 7f6154d59640 1 -- 192.168.123.105:0/881992038 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f6138002e10 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.808+0000 7f6154d59640 1 -- 192.168.123.105:0/881992038 <== mon.0 v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f6138003400 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.808+0000 7f6154d59640 1 -- 192.168.123.105:0/881992038 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f61380059d0 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.808+0000 7f6156d5d640 1 -- 192.168.123.105:0/881992038 >> v1:192.168.123.105:6789/0 conn(0x7f6150104b50 
legacy=0x7f6150104f50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.809+0000 7f6156d5d640 1 -- 192.168.123.105:0/881992038 shutdown_connections 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.809+0000 7f6156d5d640 1 -- 192.168.123.105:0/881992038 >> 192.168.123.105:0/881992038 conn(0x7f6150100360 msgr2=0x7f6150102780 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.809+0000 7f6156d5d640 1 -- 192.168.123.105:0/881992038 shutdown_connections 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.809+0000 7f6156d5d640 1 -- 192.168.123.105:0/881992038 wait complete. 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.809+0000 7f6156d5d640 1 Processor -- start 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.809+0000 7f6156d5d640 1 -- start start 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.809+0000 7f6156d5d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f615019da60 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.810+0000 7f6155d5b640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f6150104b50 0x7f615019d350 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:34308/0 (socket says 192.168.123.105:34308) 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.810+0000 7f6155d5b640 1 -- 192.168.123.105:0/480123243 learned_addr learned my addr 192.168.123.105:0/480123243 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.810+0000 7f6146ffd640 1 -- 192.168.123.105:0/480123243 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 369526773 0 0) 0x7f615019da60 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.810+0000 7f6146ffd640 1 -- 192.168.123.105:0/480123243 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f612c003620 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.810+0000 7f6146ffd640 1 -- 192.168.123.105:0/480123243 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 149526410 0 0) 0x7f612c003620 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.810+0000 7f6146ffd640 1 -- 192.168.123.105:0/480123243 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f615019da60 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.810+0000 7f6146ffd640 1 -- 192.168.123.105:0/480123243 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f6138002890 con 0x7f6150104b50 
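The "8443" printed at the start of this step is read back from the mon config store; the mon_command just below shows the exact key. A sketch of the same lookup by hand, assuming a host with the admin keyring:

    # Ask the mons for the dashboard SSL port, as the bootstrap does here.
    ceph config get mgr mgr/dashboard/ssl_server_port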
2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.811+0000 7f6146ffd640 1 -- 192.168.123.105:0/480123243 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3554372407 0 0) 0x7f615019da60 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.811+0000 7f6146ffd640 1 -- 192.168.123.105:0/480123243 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f615019dc30 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.811+0000 7f6156d5d640 1 -- 192.168.123.105:0/480123243 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f6150199e90 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.811+0000 7f6156d5d640 1 -- 192.168.123.105:0/480123243 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f615019a3d0 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.811+0000 7f6146ffd640 1 -- 192.168.123.105:0/480123243 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f6138002ed0 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.811+0000 7f6146ffd640 1 -- 192.168.123.105:0/480123243 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f6138005fb0 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.812+0000 7f6146ffd640 1 -- 192.168.123.105:0/480123243 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 11) ==== 50119+0+0 (unknown 2320845000 0 0) 0x7f6138002a80 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.812+0000 7f6146ffd640 1 -- 192.168.123.105:0/480123243 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 238952526 0 0) 0x7f613804d210 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.812+0000 7f6156d5d640 1 -- 192.168.123.105:0/480123243 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6150109390 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.815+0000 7f6146ffd640 1 -- 192.168.123.105:0/480123243 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f6138017a70 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.906+0000 7f6156d5d640 1 -- 192.168.123.105:0/480123243 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"} v 0) -- 0x7f6150109590 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.906+0000 7f6146ffd640 1 -- 192.168.123.105:0/480123243 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "config get", "who": "mgr", "key": 
"mgr/dashboard/ssl_server_port"}]=0 v8) ==== 112+0+5 (unknown 3713421687 0 83753974) 0x7f6150109590 con 0x7f6150104b50 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.909+0000 7f6156d5d640 1 -- 192.168.123.105:0/480123243 >> v1:192.168.123.105:6800/3845654103 conn(0x7f612c03e740 legacy=0x7f612c040c00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.909+0000 7f6156d5d640 1 -- 192.168.123.105:0/480123243 >> v1:192.168.123.105:6789/0 conn(0x7f6150104b50 legacy=0x7f615019d350 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.909+0000 7f6156d5d640 1 -- 192.168.123.105:0/480123243 shutdown_connections 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.909+0000 7f6156d5d640 1 -- 192.168.123.105:0/480123243 >> 192.168.123.105:0/480123243 conn(0x7f6150100360 msgr2=0x7f6150190fe0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.909+0000 7f6156d5d640 1 -- 192.168.123.105:0/480123243 shutdown_connections 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:48.909+0000 7f6156d5d640 1 -- 192.168.123.105:0/480123243 wait complete. 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:firewalld does not appear to be present 2026-03-10T13:35:49.040 INFO:teuthology.orchestra.run.vm05.stdout:Not possible to open ports <[8443]>. firewalld.service is not available 2026-03-10T13:35:49.041 INFO:teuthology.orchestra.run.vm05.stdout:Ceph Dashboard is now available at: 2026-03-10T13:35:49.041 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:35:49.042 INFO:teuthology.orchestra.run.vm05.stdout: URL: https://vm05.local:8443/ 2026-03-10T13:35:49.042 INFO:teuthology.orchestra.run.vm05.stdout: User: admin 2026-03-10T13:35:49.042 INFO:teuthology.orchestra.run.vm05.stdout: Password: vwpxf8nakm 2026-03-10T13:35:49.042 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:35:49.042 INFO:teuthology.orchestra.run.vm05.stdout:Saving cluster configuration to /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config directory 2026-03-10T13:35:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:49 vm05 ceph-mon[51512]: from='client.14154 v1:192.168.123.105:0/3925643261' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T13:35:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:49 vm05 ceph-mon[51512]: from='client.14154 v1:192.168.123.105:0/3925643261' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T13:35:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:49 vm05 ceph-mon[51512]: from='client.14162 v1:192.168.123.105:0/1595454539' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:49 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:49 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 
13:35:49 vm05 ceph-mon[51512]: from='client.14164 v1:192.168.123.105:0/850809012' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:49 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/480123243' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T13:35:49.706 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.404+0000 7faf6b17f640 1 Processor -- start 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.404+0000 7faf6b17f640 1 -- start start 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.404+0000 7faf6b17f640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7faf64108f60 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.405+0000 7faf6a17d640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7faf64104b90 0x7faf64104f90 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:34314/0 (socket says 192.168.123.105:34314) 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.405+0000 7faf6a17d640 1 -- 192.168.123.105:0/1930811821 learned_addr learned my addr 192.168.123.105:0/1930811821 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.405+0000 7faf6917b640 1 -- 192.168.123.105:0/1930811821 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 307731591 0 0) 0x7faf64108f60 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.405+0000 7faf6917b640 1 -- 192.168.123.105:0/1930811821 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7faf4c003540 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.406+0000 7faf6917b640 1 -- 192.168.123.105:0/1930811821 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 1580629782 0 0) 0x7faf4c003540 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.406+0000 7faf6917b640 1 -- 192.168.123.105:0/1930811821 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7faf6410a140 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.406+0000 7faf6917b640 1 -- 192.168.123.105:0/1930811821 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7faf58002e10 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.406+0000 7faf6917b640 1 -- 192.168.123.105:0/1930811821 <== mon.0 
v1:192.168.123.105:6789/0 4 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7faf580034c0 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.406+0000 7faf6b17f640 1 -- 192.168.123.105:0/1930811821 >> v1:192.168.123.105:6789/0 conn(0x7faf64104b90 legacy=0x7faf64104f90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.406+0000 7faf6b17f640 1 -- 192.168.123.105:0/1930811821 shutdown_connections 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.406+0000 7faf6b17f640 1 -- 192.168.123.105:0/1930811821 >> 192.168.123.105:0/1930811821 conn(0x7faf640fff40 msgr2=0x7faf64102360 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.407+0000 7faf6b17f640 1 -- 192.168.123.105:0/1930811821 shutdown_connections 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.407+0000 7faf6b17f640 1 -- 192.168.123.105:0/1930811821 wait complete. 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.407+0000 7faf6b17f640 1 Processor -- start 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.407+0000 7faf6b17f640 1 -- start start 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.407+0000 7faf6b17f640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7faf6419a520 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.408+0000 7faf6a17d640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7faf64104b90 0x7faf64199e10 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:34326/0 (socket says 192.168.123.105:34326) 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.408+0000 7faf6a17d640 1 -- 192.168.123.105:0/653341670 learned_addr learned my addr 192.168.123.105:0/653341670 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.408+0000 7faf537fe640 1 -- 192.168.123.105:0/653341670 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2542819316 0 0) 0x7faf6419a520 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.408+0000 7faf537fe640 1 -- 192.168.123.105:0/653341670 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7faf38003620 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.408+0000 7faf537fe640 1 -- 192.168.123.105:0/653341670 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1579404190 0 0) 0x7faf38003620 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.408+0000 7faf537fe640 1 -- 192.168.123.105:0/653341670 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 
0x7faf6419a520 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.408+0000 7faf537fe640 1 -- 192.168.123.105:0/653341670 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7faf58003270 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.408+0000 7faf537fe640 1 -- 192.168.123.105:0/653341670 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3764953943 0 0) 0x7faf6419a520 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.408+0000 7faf537fe640 1 -- 192.168.123.105:0/653341670 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7faf6419a6f0 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.408+0000 7faf6b17f640 1 -- 192.168.123.105:0/653341670 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7faf6419aa00 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.409+0000 7faf6b17f640 1 -- 192.168.123.105:0/653341670 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7faf6419e590 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.409+0000 7faf537fe640 1 -- 192.168.123.105:0/653341670 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7faf58002a10 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.409+0000 7faf537fe640 1 -- 192.168.123.105:0/653341670 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7faf58005dd0 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.409+0000 7faf537fe640 1 -- 192.168.123.105:0/653341670 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 12) ==== 50225+0+0 (unknown 1853166531 0 0) 0x7faf580124b0 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.410+0000 7faf537fe640 1 -- 192.168.123.105:0/653341670 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 238952526 0 0) 0x7faf5804e270 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.410+0000 7faf6b17f640 1 -- 192.168.123.105:0/653341670 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7faf34005180 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.413+0000 7faf537fe640 1 -- 192.168.123.105:0/653341670 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7faf580189c0 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.545+0000 7faf6b17f640 1 -- 192.168.123.105:0/653341670 --> v1:192.168.123.105:6789/0 -- mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) -- 0x7faf34005470 
con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.549+0000 7faf537fe640 1 -- 192.168.123.105:0/653341670 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{prefix=config-key set, key=mgr/dashboard/cluster/status}]=0 set mgr/dashboard/cluster/status v28) ==== 153+0+0 (unknown 1169358022 0 0) 0x7faf580182c0 con 0x7faf64104b90 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.551+0000 7faf6b17f640 1 -- 192.168.123.105:0/653341670 >> v1:192.168.123.105:6800/3845654103 conn(0x7faf3803ec60 legacy=0x7faf38041120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.551+0000 7faf6b17f640 1 -- 192.168.123.105:0/653341670 >> v1:192.168.123.105:6789/0 conn(0x7faf64104b90 legacy=0x7faf64199e10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.551+0000 7faf6b17f640 1 -- 192.168.123.105:0/653341670 shutdown_connections 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.552+0000 7faf6b17f640 1 -- 192.168.123.105:0/653341670 >> 192.168.123.105:0/653341670 conn(0x7faf640fff40 msgr2=0x7faf64108730 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.552+0000 7faf6b17f640 1 -- 192.168.123.105:0/653341670 shutdown_connections 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:/usr/bin/ceph: stderr 2026-03-10T13:35:49.552+0000 7faf6b17f640 1 -- 192.168.123.105:0/653341670 wait complete. 
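The mon_command acknowledged above writes dashboard state under the config-key "mgr/dashboard/cluster/status". Since the config-key store is directly queryable, the stored value can be read back; a sketch, assuming a host with the admin keyring:

    # Read back the key the bootstrap just wrote (name taken from the ack above).
    ceph config-key get mgr/dashboard/cluster/status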
2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout: sudo /sbin/cephadm shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:Or, if you are only running a single cluster on this host: 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout: sudo /sbin/cephadm shell 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-10T13:35:49.707 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:35:49.708 INFO:teuthology.orchestra.run.vm05.stdout: ceph telemetry on 2026-03-10T13:35:49.708 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:35:49.708 INFO:teuthology.orchestra.run.vm05.stdout:For more information see: 2026-03-10T13:35:49.708 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:35:49.708 INFO:teuthology.orchestra.run.vm05.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-10T13:35:49.708 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:35:49.708 INFO:teuthology.orchestra.run.vm05.stdout:Bootstrap complete. 2026-03-10T13:35:49.736 INFO:tasks.cephadm:Fetching config... 2026-03-10T13:35:49.736 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T13:35:49.736 DEBUG:teuthology.orchestra.run.vm05:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-10T13:35:49.758 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-10T13:35:49.758 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T13:35:49.758 DEBUG:teuthology.orchestra.run.vm05:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-10T13:35:49.819 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-10T13:35:49.819 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T13:35:49.819 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/keyring of=/dev/stdout 2026-03-10T13:35:49.887 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-10T13:35:49.887 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T13:35:49.888 DEBUG:teuthology.orchestra.run.vm05:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-10T13:35:49.946 INFO:tasks.cephadm:Installing pub ssh key for root users... 2026-03-10T13:35:49.946 DEBUG:teuthology.orchestra.run.vm05:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOiSjLYMQpdq6CS2mH43c483nurQgxF4IVVwFK6/SzGc ceph-e063dc72-1c85-11f1-a098-09993c5c5b66' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T13:35:50.036 INFO:teuthology.orchestra.run.vm05.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOiSjLYMQpdq6CS2mH43c483nurQgxF4IVVwFK6/SzGc ceph-e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:35:50.052 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:50 vm05 ceph-mon[51512]: mgrmap e12: y(active, since 2s) 2026-03-10T13:35:50.052 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:50 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/653341670' entity='client.admin' 2026-03-10T13:35:50.053 DEBUG:teuthology.orchestra.run.vm09:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOiSjLYMQpdq6CS2mH43c483nurQgxF4IVVwFK6/SzGc ceph-e063dc72-1c85-11f1-a098-09993c5c5b66' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T13:35:50.087 INFO:teuthology.orchestra.run.vm09.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOiSjLYMQpdq6CS2mH43c483nurQgxF4IVVwFK6/SzGc ceph-e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:35:50.097 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-10T13:35:50.300 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:35:50.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.448+0000 7f7474040640 1 -- 192.168.123.105:0/3765585546 >> v1:192.168.123.105:6789/0 conn(0x7f746c1013f0 legacy=0x7f746c1017f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:50.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.448+0000 7f7474040640 1 -- 192.168.123.105:0/3765585546 shutdown_connections 2026-03-10T13:35:50.449 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.448+0000 7f7474040640 1 -- 192.168.123.105:0/3765585546 >> 192.168.123.105:0/3765585546 conn(0x7f746c0fbba0 msgr2=0x7f746c0fdfc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:50.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.450+0000 7f7474040640 1 -- 192.168.123.105:0/3765585546 shutdown_connections 2026-03-10T13:35:50.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.450+0000 7f7474040640 1 -- 192.168.123.105:0/3765585546 wait complete. 
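The "install -d ... authorized_keys" commands above push the cluster's ed25519 public key (comment ceph-e063dc72-...) to root on vm05 and vm09; cephadm manages hosts over root SSH, so the key must be in place before any "ceph orch host add". A sketch of verifying this by hand; the grep pattern is just the key comment from this log, and the host is assumed to be one of the targets:

    # Confirm the cluster key landed in root's authorized_keys, then fetch the
    # orchestrator's public key for comparison.
    sudo grep ceph-e063dc72 /root/.ssh/authorized_keys
    sudo cephadm shell -- ceph cephadm get-pub-key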
2026-03-10T13:35:50.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.450+0000 7f7474040640 1 Processor -- start 2026-03-10T13:35:50.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.450+0000 7f7474040640 1 -- start start 2026-03-10T13:35:50.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.450+0000 7f7474040640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f746c19ade0 con 0x7f746c1013f0 2026-03-10T13:35:50.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.450+0000 7f7471db5640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f746c1013f0 0x7f746c196910 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:34348/0 (socket says 192.168.123.105:34348) 2026-03-10T13:35:50.451 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.450+0000 7f7471db5640 1 -- 192.168.123.105:0/916184332 learned_addr learned my addr 192.168.123.105:0/916184332 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:50.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.451+0000 7f745affd640 1 -- 192.168.123.105:0/916184332 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2954306729 0 0) 0x7f746c19ade0 con 0x7f746c1013f0 2026-03-10T13:35:50.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.452+0000 7f745affd640 1 -- 192.168.123.105:0/916184332 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f743c003620 con 0x7f746c1013f0 2026-03-10T13:35:50.452 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.452+0000 7f745affd640 1 -- 192.168.123.105:0/916184332 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 626563287 0 0) 0x7f743c003620 con 0x7f746c1013f0 2026-03-10T13:35:50.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.452+0000 7f745affd640 1 -- 192.168.123.105:0/916184332 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f746c19ade0 con 0x7f746c1013f0 2026-03-10T13:35:50.453 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.452+0000 7f745affd640 1 -- 192.168.123.105:0/916184332 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f7460002d90 con 0x7f746c1013f0 2026-03-10T13:35:50.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.452+0000 7f745affd640 1 -- 192.168.123.105:0/916184332 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2802377544 0 0) 0x7f746c19ade0 con 0x7f746c1013f0 2026-03-10T13:35:50.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.452+0000 7f745affd640 1 -- 192.168.123.105:0/916184332 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f746c197020 con 0x7f746c1013f0 2026-03-10T13:35:50.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.452+0000 7f7474040640 1 -- 192.168.123.105:0/916184332 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f746c197330 con 0x7f746c1013f0 2026-03-10T13:35:50.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.452+0000 7f7474040640 1 -- 192.168.123.105:0/916184332 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f746c1ac6a0 con 0x7f746c1013f0 2026-03-10T13:35:50.456 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.453+0000 7f7474040640 1 -- 
192.168.123.105:0/916184332 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f746c10a990 con 0x7f746c1013f0 2026-03-10T13:35:50.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.457+0000 7f745affd640 1 -- 192.168.123.105:0/916184332 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f74600033e0 con 0x7f746c1013f0 2026-03-10T13:35:50.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.457+0000 7f745affd640 1 -- 192.168.123.105:0/916184332 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f7460004c50 con 0x7f746c1013f0 2026-03-10T13:35:50.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.457+0000 7f745affd640 1 -- 192.168.123.105:0/916184332 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 12) ==== 50225+0+0 (unknown 1853166531 0 0) 0x7f7460004ed0 con 0x7f746c1013f0 2026-03-10T13:35:50.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.457+0000 7f745affd640 1 -- 192.168.123.105:0/916184332 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 238952526 0 0) 0x7f746004e190 con 0x7f746c1013f0 2026-03-10T13:35:50.458 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.457+0000 7f745affd640 1 -- 192.168.123.105:0/916184332 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f746004e5e0 con 0x7f746c1013f0 2026-03-10T13:35:50.568 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.567+0000 7f7474040640 1 -- 192.168.123.105:0/916184332 --> v1:192.168.123.105:6789/0 -- mon_command([{prefix=config set, name=mgr/cephadm/allow_ptrace}] v 0) -- 0x7f746c10aba0 con 0x7f746c1013f0 2026-03-10T13:35:50.573 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.571+0000 7f745affd640 1 -- 192.168.123.105:0/916184332 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/allow_ptrace}]=0 v9) ==== 125+0+0 (unknown 3028693289 0 0) 0x7f7460018840 con 0x7f746c1013f0 2026-03-10T13:35:50.576 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.576+0000 7f7458ff9640 1 -- 192.168.123.105:0/916184332 >> v1:192.168.123.105:6800/3845654103 conn(0x7f743c03ed20 legacy=0x7f743c0411e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:50.577 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.576+0000 7f7458ff9640 1 -- 192.168.123.105:0/916184332 >> v1:192.168.123.105:6789/0 conn(0x7f746c1013f0 legacy=0x7f746c196910 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:50.577 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.576+0000 7f7458ff9640 1 -- 192.168.123.105:0/916184332 shutdown_connections 2026-03-10T13:35:50.577 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.576+0000 7f7458ff9640 1 -- 192.168.123.105:0/916184332 >> 192.168.123.105:0/916184332 conn(0x7f746c0fbba0 msgr2=0x7f746c108920 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:50.577 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.576+0000 7f7458ff9640 1 -- 192.168.123.105:0/916184332 shutdown_connections 2026-03-10T13:35:50.577 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:50.576+0000 7f7458ff9640 1 -- 192.168.123.105:0/916184332 wait complete. 
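The mon_command pair above applies the "ceph config set mgr mgr/cephadm/allow_ptrace true" issued at 13:35:50.097 via cephadm shell; this option is intended to let cephadm run its daemon containers with ptrace allowed so debuggers such as gdb can attach inside them. A sketch of applying and verifying the setting by hand, assuming a host with the admin keyring:

    # Apply and read back the cephadm ptrace setting used by this test.
    ceph config set mgr mgr/cephadm/allow_ptrace true
    ceph config get mgr mgr/cephadm/allow_ptrace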
2026-03-10T13:35:50.731 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-10T13:35:50.731 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-10T13:35:50.935 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:35:51.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.079+0000 7fa933242640 1 -- 192.168.123.105:0/719320902 >> v1:192.168.123.105:6789/0 conn(0x7fa92c0770a0 legacy=0x7fa92c075500 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:51.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.079+0000 7fa933242640 1 -- 192.168.123.105:0/719320902 shutdown_connections 2026-03-10T13:35:51.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.079+0000 7fa933242640 1 -- 192.168.123.105:0/719320902 >> 192.168.123.105:0/719320902 conn(0x7fa92c0fd9c0 msgr2=0x7fa92c0ffe20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:51.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.080+0000 7fa933242640 1 -- 192.168.123.105:0/719320902 shutdown_connections 2026-03-10T13:35:51.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.080+0000 7fa933242640 1 -- 192.168.123.105:0/719320902 wait complete. 2026-03-10T13:35:51.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.080+0000 7fa933242640 1 Processor -- start 2026-03-10T13:35:51.080 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.080+0000 7fa933242640 1 -- start start 2026-03-10T13:35:51.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.080+0000 7fa933242640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa92c19b570 con 0x7fa92c0770a0 2026-03-10T13:35:51.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.081+0000 7fa930fb7640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fa92c0770a0 0x7fa92c19ae60 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:34364/0 (socket says 192.168.123.105:34364) 2026-03-10T13:35:51.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.081+0000 7fa930fb7640 1 -- 192.168.123.105:0/872138806 learned_addr learned my addr 192.168.123.105:0/872138806 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:51.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.081+0000 7fa921ffb640 1 -- 192.168.123.105:0/872138806 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3262392234 0 0) 0x7fa92c19b570 con 0x7fa92c0770a0 2026-03-10T13:35:51.081 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.081+0000 7fa921ffb640 1 -- 192.168.123.105:0/872138806 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa8fc003620 con 0x7fa92c0770a0 2026-03-10T13:35:51.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.081+0000 7fa921ffb640 1 -- 192.168.123.105:0/872138806 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3159189461 0 0) 0x7fa8fc003620 con 0x7fa92c0770a0 2026-03-10T13:35:51.082 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.081+0000 7fa921ffb640 1 -- 192.168.123.105:0/872138806 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fa92c19b570 con 0x7fa92c0770a0 2026-03-10T13:35:51.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.081+0000 7fa921ffb640 1 -- 192.168.123.105:0/872138806 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fa914003170 con 0x7fa92c0770a0 2026-03-10T13:35:51.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.081+0000 7fa921ffb640 1 -- 192.168.123.105:0/872138806 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 18698084 0 0) 0x7fa92c19b570 con 0x7fa92c0770a0 2026-03-10T13:35:51.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.081+0000 7fa921ffb640 1 -- 192.168.123.105:0/872138806 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa92c19b740 con 0x7fa92c0770a0 2026-03-10T13:35:51.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.081+0000 7fa933242640 1 -- 192.168.123.105:0/872138806 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fa92c19ba50 con 0x7fa92c0770a0 2026-03-10T13:35:51.082 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.081+0000 7fa933242640 1 -- 192.168.123.105:0/872138806 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fa92c19f5e0 con 0x7fa92c0770a0 2026-03-10T13:35:51.083 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.083+0000 7fa921ffb640 1 -- 192.168.123.105:0/872138806 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fa9140033d0 con 0x7fa92c0770a0 2026-03-10T13:35:51.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.083+0000 7fa921ffb640 1 -- 192.168.123.105:0/872138806 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fa914004d10 con 0x7fa92c0770a0 2026-03-10T13:35:51.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.083+0000 7fa921ffb640 1 -- 192.168.123.105:0/872138806 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 12) ==== 50225+0+0 (unknown 1853166531 0 0) 0x7fa914004f90 con 0x7fa92c0770a0 2026-03-10T13:35:51.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.083+0000 7fa933242640 1 -- 192.168.123.105:0/872138806 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa8f8005180 con 0x7fa92c0770a0 2026-03-10T13:35:51.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.086+0000 7fa921ffb640 1 -- 192.168.123.105:0/872138806 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 238952526 0 0) 0x7fa91404e2d0 con 0x7fa92c0770a0 2026-03-10T13:35:51.087 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.086+0000 7fa921ffb640 1 -- 192.168.123.105:0/872138806 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fa914018980 con 0x7fa92c0770a0 2026-03-10T13:35:51.182 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.181+0000 7fa933242640 1 -- 192.168.123.105:0/872138806 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}) -- 0x7fa8f8002bf0 con 
0x7fa8fc03e9b0 2026-03-10T13:35:51.187 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.186+0000 7fa921ffb640 1 -- 192.168.123.105:0/872138806 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (unknown 0 0 0) 0x7fa8f8002bf0 con 0x7fa8fc03e9b0 2026-03-10T13:35:51.193 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.192+0000 7fa933242640 1 -- 192.168.123.105:0/872138806 >> v1:192.168.123.105:6800/3845654103 conn(0x7fa8fc03e9b0 legacy=0x7fa8fc040e70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:51.193 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.192+0000 7fa933242640 1 -- 192.168.123.105:0/872138806 >> v1:192.168.123.105:6789/0 conn(0x7fa92c0770a0 legacy=0x7fa92c19ae60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:51.195 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.194+0000 7fa933242640 1 -- 192.168.123.105:0/872138806 shutdown_connections 2026-03-10T13:35:51.195 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.194+0000 7fa933242640 1 -- 192.168.123.105:0/872138806 >> 192.168.123.105:0/872138806 conn(0x7fa92c0fd9c0 msgr2=0x7fa92c0ffba0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:51.195 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.194+0000 7fa933242640 1 -- 192.168.123.105:0/872138806 shutdown_connections 2026-03-10T13:35:51.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.195+0000 7fa933242640 1 -- 192.168.123.105:0/872138806 wait complete. 2026-03-10T13:35:51.366 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm09 2026-03-10T13:35:51.366 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T13:35:51.366 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.conf 2026-03-10T13:35:51.381 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T13:35:51.381 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:35:51.437 INFO:tasks.cephadm:Adding host vm09 to orchestrator... 2026-03-10T13:35:51.437 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch host add vm09 2026-03-10T13:35:51.629 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:35:51.713 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:51 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/916184332' entity='client.admin' 2026-03-10T13:35:51.713 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:51 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:51.713 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:51 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:51.713 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:51 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:35:51.713 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:51 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:51.713 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:51 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:51.713 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:51 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:51.713 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:51 vm05 ceph-mon[51512]: from='client.14172 v1:192.168.123.105:0/872138806' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:51.713 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:51 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:51.713 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:51 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:51.713 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:51 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:35:51.713 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:51 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:35:51.713 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:51 vm05 ceph-mon[51512]: Updating vm05:/etc/ceph/ceph.conf 2026-03-10T13:35:51.713 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:51 vm05 ceph-mon[51512]: Updating vm05:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.conf 2026-03-10T13:35:51.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.777+0000 7fab5d3d0640 1 -- 192.168.123.105:0/3280278174 >> v1:192.168.123.105:6789/0 conn(0x7fab580734c0 legacy=0x7fab580738c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:51.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.777+0000 7fab5d3d0640 1 -- 192.168.123.105:0/3280278174 shutdown_connections 2026-03-10T13:35:51.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.777+0000 7fab5d3d0640 1 -- 192.168.123.105:0/3280278174 >> 192.168.123.105:0/3280278174 conn(0x7fab5806eee0 msgr2=0x7fab58071340 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:51.780 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.777+0000 7fab5d3d0640 1 -- 192.168.123.105:0/3280278174 shutdown_connections 2026-03-10T13:35:51.780 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.777+0000 7fab5d3d0640 1 -- 192.168.123.105:0/3280278174 wait complete. 2026-03-10T13:35:51.780 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.778+0000 7fab5d3d0640 1 Processor -- start 2026-03-10T13:35:51.780 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.778+0000 7fab5d3d0640 1 -- start start 2026-03-10T13:35:51.780 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.778+0000 7fab5d3d0640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fab5807e9a0 con 0x7fab5807ae50 2026-03-10T13:35:51.780 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.779+0000 7fab56ffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fab5807ae50 0x7fab5807d280 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:34396/0 (socket says 192.168.123.105:34396) 2026-03-10T13:35:51.780 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.779+0000 7fab56ffd640 1 -- 192.168.123.105:0/1533102281 learned_addr learned my addr 192.168.123.105:0/1533102281 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:51.781 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.781+0000 7fab37fff640 1 -- 192.168.123.105:0/1533102281 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1510753795 0 0) 0x7fab5807e9a0 con 0x7fab5807ae50 2026-03-10T13:35:51.781 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.781+0000 7fab37fff640 1 -- 192.168.123.105:0/1533102281 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fab38003620 con 0x7fab5807ae50 2026-03-10T13:35:51.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.781+0000 7fab37fff640 1 -- 192.168.123.105:0/1533102281 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2036560458 0 0) 0x7fab38003620 con 0x7fab5807ae50 2026-03-10T13:35:51.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.781+0000 7fab37fff640 1 -- 192.168.123.105:0/1533102281 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fab5807e9a0 con 0x7fab5807ae50 2026-03-10T13:35:51.782 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.781+0000 7fab37fff640 1 -- 192.168.123.105:0/1533102281 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fab48003170 con 0x7fab5807ae50 2026-03-10T13:35:51.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.783+0000 7fab37fff640 1 -- 192.168.123.105:0/1533102281 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1467384552 0 0) 0x7fab5807e9a0 con 0x7fab5807ae50 2026-03-10T13:35:51.783 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.783+0000 7fab37fff640 1 -- 192.168.123.105:0/1533102281 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fab5807fb80 con 0x7fab5807ae50 2026-03-10T13:35:51.784 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.783+0000 7fab5d3d0640 1 -- 192.168.123.105:0/1533102281 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fab5807eb70 con 0x7fab5807ae50 2026-03-10T13:35:51.784 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.783+0000 7fab5d3d0640 1 -- 192.168.123.105:0/1533102281 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 
0x7fab5807f0b0 con 0x7fab5807ae50 2026-03-10T13:35:51.784 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.783+0000 7fab37fff640 1 -- 192.168.123.105:0/1533102281 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fab48003c50 con 0x7fab5807ae50 2026-03-10T13:35:51.784 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.783+0000 7fab37fff640 1 -- 192.168.123.105:0/1533102281 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fab48004b50 con 0x7fab5807ae50 2026-03-10T13:35:51.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.783+0000 7fab5d3d0640 1 -- 192.168.123.105:0/1533102281 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fab5807a4f0 con 0x7fab5807ae50 2026-03-10T13:35:51.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.787+0000 7fab37fff640 1 -- 192.168.123.105:0/1533102281 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 12) ==== 50225+0+0 (unknown 1853166531 0 0) 0x7fab48004d10 con 0x7fab5807ae50 2026-03-10T13:35:51.787 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.787+0000 7fab37fff640 1 -- 192.168.123.105:0/1533102281 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 238952526 0 0) 0x7fab4804e0c0 con 0x7fab5807ae50 2026-03-10T13:35:51.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.790+0000 7fab37fff640 1 -- 192.168.123.105:0/1533102281 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fab480186f0 con 0x7fab5807ae50 2026-03-10T13:35:51.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:51.886+0000 7fab5d3d0640 1 -- 192.168.123.105:0/1533102281 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}) -- 0x7fab5807f500 con 0x7fab3803ec80 2026-03-10T13:35:52.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:52.916+0000 7fab37fff640 1 -- 192.168.123.105:0/1533102281 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2642844410 0 0) 0x7fab4804c960 con 0x7fab5807ae50 2026-03-10T13:35:53.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:52 vm05 ceph-mon[51512]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:35:53.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:52 vm05 ceph-mon[51512]: Updating vm05:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.client.admin.keyring 2026-03-10T13:35:53.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:52 vm05 ceph-mon[51512]: from='client.14174 v1:192.168.123.105:0/1533102281' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:53.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:53.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:53.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:53.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:52 vm05 ceph-mon[51512]: Deploying cephadm 
binary to vm09 2026-03-10T13:35:53.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.340+0000 7fab37fff640 1 -- 192.168.123.105:0/1533102281 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+46 (unknown 0 0 530711637) 0x7fab5807f500 con 0x7fab3803ec80 2026-03-10T13:35:53.340 INFO:teuthology.orchestra.run.vm05.stdout:Added host 'vm09' with addr '192.168.123.109' 2026-03-10T13:35:53.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.342+0000 7fab5d3d0640 1 -- 192.168.123.105:0/1533102281 >> v1:192.168.123.105:6800/3845654103 conn(0x7fab3803ec80 legacy=0x7fab38041120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:53.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.342+0000 7fab5d3d0640 1 -- 192.168.123.105:0/1533102281 >> v1:192.168.123.105:6789/0 conn(0x7fab5807ae50 legacy=0x7fab5807d280 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:53.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.342+0000 7fab5d3d0640 1 -- 192.168.123.105:0/1533102281 shutdown_connections 2026-03-10T13:35:53.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.342+0000 7fab5d3d0640 1 -- 192.168.123.105:0/1533102281 >> 192.168.123.105:0/1533102281 conn(0x7fab5806eee0 msgr2=0x7fab580713c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:53.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.342+0000 7fab5d3d0640 1 -- 192.168.123.105:0/1533102281 shutdown_connections 2026-03-10T13:35:53.343 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.342+0000 7fab5d3d0640 1 -- 192.168.123.105:0/1533102281 wait complete. 2026-03-10T13:35:53.496 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch host ls --format=json 2026-03-10T13:35:53.662 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:35:53.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.790+0000 7fc56a8e8640 1 -- 192.168.123.105:0/3184575330 >> v1:192.168.123.105:6789/0 conn(0x7fc56410a570 legacy=0x7fc56410a950 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:53.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.791+0000 7fc56a8e8640 1 -- 192.168.123.105:0/3184575330 shutdown_connections 2026-03-10T13:35:53.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.791+0000 7fc56a8e8640 1 -- 192.168.123.105:0/3184575330 >> 192.168.123.105:0/3184575330 conn(0x7fc564100180 msgr2=0x7fc5641025a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:53.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.791+0000 7fc56a8e8640 1 -- 192.168.123.105:0/3184575330 shutdown_connections 2026-03-10T13:35:53.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.791+0000 7fc56a8e8640 1 -- 192.168.123.105:0/3184575330 wait complete. 
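At this point vm09 has been registered with the orchestrator ("Added host 'vm09' with addr '192.168.123.109'"), and the harness immediately re-reads the host list as JSON to confirm membership (the output appears a few records further down). A sketch of the same check (the subprocess call and the expected host set are illustrative assumptions):

    # Hedged sketch: verify orchestrator host membership by parsing the
    # same `ceph orch host ls --format=json` output shown in this log.
    import json
    import subprocess

    out = subprocess.run(
        ["sudo", "ceph", "orch", "host", "ls", "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    # Each entry carries "addr", "hostname", "labels", "status", as above.
    hosts = {h["hostname"] for h in json.loads(out)}
    assert {"vm05", "vm09"} <= hosts, hosts  # both test nodes registered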
2026-03-10T13:35:53.791 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.791+0000 7fc56a8e8640 1 Processor -- start 2026-03-10T13:35:53.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.791+0000 7fc56a8e8640 1 -- start start 2026-03-10T13:35:53.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.791+0000 7fc56a8e8640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fc56419b7c0 con 0x7fc56410a570 2026-03-10T13:35:53.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.792+0000 7fc563fff640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fc56410a570 0x7fc56419b0b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:34412/0 (socket says 192.168.123.105:34412) 2026-03-10T13:35:53.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.792+0000 7fc563fff640 1 -- 192.168.123.105:0/2125508475 learned_addr learned my addr 192.168.123.105:0/2125508475 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:53.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.792+0000 7fc5617fa640 1 -- 192.168.123.105:0/2125508475 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 130547036 0 0) 0x7fc56419b7c0 con 0x7fc56410a570 2026-03-10T13:35:53.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.792+0000 7fc5617fa640 1 -- 192.168.123.105:0/2125508475 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fc538003620 con 0x7fc56410a570 2026-03-10T13:35:53.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.792+0000 7fc5617fa640 1 -- 192.168.123.105:0/2125508475 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1817377445 0 0) 0x7fc538003620 con 0x7fc56410a570 2026-03-10T13:35:53.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.792+0000 7fc5617fa640 1 -- 192.168.123.105:0/2125508475 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fc56419b7c0 con 0x7fc56410a570 2026-03-10T13:35:53.792 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.792+0000 7fc5617fa640 1 -- 192.168.123.105:0/2125508475 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fc550002cd0 con 0x7fc56410a570 2026-03-10T13:35:53.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.792+0000 7fc5617fa640 1 -- 192.168.123.105:0/2125508475 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 945587867 0 0) 0x7fc56419b7c0 con 0x7fc56410a570 2026-03-10T13:35:53.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.792+0000 7fc5617fa640 1 -- 192.168.123.105:0/2125508475 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc56419b990 con 0x7fc56410a570 2026-03-10T13:35:53.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.792+0000 7fc56a8e8640 1 -- 192.168.123.105:0/2125508475 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fc56419bc80 con 0x7fc56410a570 2026-03-10T13:35:53.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.792+0000 7fc5617fa640 1 -- 192.168.123.105:0/2125508475 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fc550003aa0 con 0x7fc56410a570 2026-03-10T13:35:53.793 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.792+0000 7fc5617fa640 1 -- 192.168.123.105:0/2125508475 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fc550004de0 con 0x7fc56410a570 2026-03-10T13:35:53.793 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.793+0000 7fc5617fa640 1 -- 192.168.123.105:0/2125508475 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2642844410 0 0) 0x7fc550003650 con 0x7fc56410a570 2026-03-10T13:35:53.794 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.793+0000 7fc56a8e8640 1 -- 192.168.123.105:0/2125508475 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fc5641a3e60 con 0x7fc56410a570 2026-03-10T13:35:53.797 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.794+0000 7fc56a8e8640 1 -- 192.168.123.105:0/2125508475 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fc56419c410 con 0x7fc56410a570 2026-03-10T13:35:53.797 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.794+0000 7fc5617fa640 1 -- 192.168.123.105:0/2125508475 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 238952526 0 0) 0x7fc55004d6a0 con 0x7fc56410a570 2026-03-10T13:35:53.797 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.797+0000 7fc5617fa640 1 -- 192.168.123.105:0/2125508475 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fc550017c10 con 0x7fc56410a570 2026-03-10T13:35:53.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.898+0000 7fc56a8e8640 1 -- 192.168.123.105:0/2125508475 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7fc56410bde0 con 0x7fc53803ed00 2026-03-10T13:35:53.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.899+0000 7fc5617fa640 1 -- 192.168.123.105:0/2125508475 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+155 (unknown 0 0 870659353) 0x7fc56410bde0 con 0x7fc53803ed00 2026-03-10T13:35:53.900 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:35:53.900 INFO:teuthology.orchestra.run.vm05.stdout:[{"addr": "192.168.123.105", "hostname": "vm05", "labels": [], "status": ""}, {"addr": "192.168.123.109", "hostname": "vm09", "labels": [], "status": ""}] 2026-03-10T13:35:53.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.902+0000 7fc56a8e8640 1 -- 192.168.123.105:0/2125508475 >> v1:192.168.123.105:6800/3845654103 conn(0x7fc53803ed00 legacy=0x7fc5380411c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:53.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.902+0000 7fc56a8e8640 1 -- 192.168.123.105:0/2125508475 >> v1:192.168.123.105:6789/0 conn(0x7fc56410a570 legacy=0x7fc56419b0b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:53.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.902+0000 7fc56a8e8640 1 -- 192.168.123.105:0/2125508475 shutdown_connections 2026-03-10T13:35:53.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.902+0000 7fc56a8e8640 1 -- 192.168.123.105:0/2125508475 >> 192.168.123.105:0/2125508475 conn(0x7fc564100180 msgr2=0x7fc56410b6d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:53.902 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.902+0000 7fc56a8e8640 1 -- 192.168.123.105:0/2125508475 shutdown_connections 2026-03-10T13:35:53.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:53.902+0000 7fc56a8e8640 1 -- 192.168.123.105:0/2125508475 wait complete. 2026-03-10T13:35:54.049 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:53 vm05 ceph-mon[51512]: mgrmap e13: y(active, since 6s) 2026-03-10T13:35:54.049 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:53 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:54.049 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:53 vm05 ceph-mon[51512]: Added host vm09 2026-03-10T13:35:54.049 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:53 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:54.049 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:53 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:54.049 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:53 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:54.077 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T13:35:54.077 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd crush tunables default 2026-03-10T13:35:54.242 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:35:54.361 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.360+0000 7f25fb4ef640 1 -- 192.168.123.105:0/2348196968 >> v1:192.168.123.105:6789/0 conn(0x7f25f4103050 legacy=0x7f25f41005d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:54.361 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.360+0000 7f25fb4ef640 1 -- 192.168.123.105:0/2348196968 shutdown_connections 2026-03-10T13:35:54.361 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.360+0000 7f25fb4ef640 1 -- 192.168.123.105:0/2348196968 >> 192.168.123.105:0/2348196968 conn(0x7f25f40fc520 msgr2=0x7f25f40fe940 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:54.361 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.360+0000 7f25fb4ef640 1 -- 192.168.123.105:0/2348196968 shutdown_connections 2026-03-10T13:35:54.361 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.360+0000 7f25fb4ef640 1 -- 192.168.123.105:0/2348196968 wait complete. 
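"Setting crush tunables to default" pins the CRUSH tunables profile so placement behavior is deterministic regardless of what the cluster bootstrapped with; the mon ack a few records below reports "adjusted tunables profile to default". One way to read back the applied profile afterwards (the crush-dump JSON key usage here is an assumption, not part of this run):

    # Hedged sketch: confirm the tunables profile after
    # `ceph osd crush tunables default` has been acknowledged.
    import json
    import subprocess

    crush = json.loads(subprocess.run(
        ["sudo", "ceph", "osd", "crush", "dump", "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(crush["tunables"]["profile"])  # expected: "default"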
2026-03-10T13:35:54.361 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.361+0000 7f25fb4ef640 1 Processor -- start 2026-03-10T13:35:54.361 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.361+0000 7f25fb4ef640 1 -- start start 2026-03-10T13:35:54.362 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.361+0000 7f25fb4ef640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f25f41079e0 con 0x7f25f4103050 2026-03-10T13:35:54.362 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.361+0000 7f25f9264640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f25f4103050 0x7f25f4103a80 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:34426/0 (socket says 192.168.123.105:34426) 2026-03-10T13:35:54.362 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.361+0000 7f25f9264640 1 -- 192.168.123.105:0/154252648 learned_addr learned my addr 192.168.123.105:0/154252648 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:35:54.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.362+0000 7f25e27fc640 1 -- 192.168.123.105:0/154252648 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1384962014 0 0) 0x7f25f41079e0 con 0x7f25f4103050 2026-03-10T13:35:54.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.362+0000 7f25e27fc640 1 -- 192.168.123.105:0/154252648 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f25c8003620 con 0x7f25f4103050 2026-03-10T13:35:54.364 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.362+0000 7f25e27fc640 1 -- 192.168.123.105:0/154252648 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3048576624 0 0) 0x7f25c8003620 con 0x7f25f4103050 2026-03-10T13:35:54.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.362+0000 7f25e27fc640 1 -- 192.168.123.105:0/154252648 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f25f41079e0 con 0x7f25f4103050 2026-03-10T13:35:54.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.362+0000 7f25e27fc640 1 -- 192.168.123.105:0/154252648 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f25e8003170 con 0x7f25f4103050 2026-03-10T13:35:54.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.362+0000 7f25e27fc640 1 -- 192.168.123.105:0/154252648 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1498662374 0 0) 0x7f25f41079e0 con 0x7f25f4103050 2026-03-10T13:35:54.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.362+0000 7f25e27fc640 1 -- 192.168.123.105:0/154252648 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f25f4107bb0 con 0x7f25f4103050 2026-03-10T13:35:54.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.362+0000 7f25fb4ef640 1 -- 192.168.123.105:0/154252648 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f25f4104190 con 0x7f25f4103050 2026-03-10T13:35:54.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.362+0000 7f25fb4ef640 1 -- 192.168.123.105:0/154252648 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f25f4104650 con 0x7f25f4103050 2026-03-10T13:35:54.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.362+0000 7f25e27fc640 1 -- 
192.168.123.105:0/154252648 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f25e8004420 con 0x7f25f4103050 2026-03-10T13:35:54.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.362+0000 7f25e27fc640 1 -- 192.168.123.105:0/154252648 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f25e8004f00 con 0x7f25f4103050 2026-03-10T13:35:54.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.363+0000 7f25e27fc640 1 -- 192.168.123.105:0/154252648 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2642844410 0 0) 0x7f25e80050c0 con 0x7f25f4103050 2026-03-10T13:35:54.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.363+0000 7f25e27fc640 1 -- 192.168.123.105:0/154252648 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (unknown 238952526 0 0) 0x7f25e804e1e0 con 0x7f25f4103050 2026-03-10T13:35:54.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.363+0000 7f25fb4ef640 1 -- 192.168.123.105:0/154252648 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f25bc005180 con 0x7f25f4103050 2026-03-10T13:35:54.367 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.366+0000 7f25e27fc640 1 -- 192.168.123.105:0/154252648 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f25e8018810 con 0x7f25f4103050 2026-03-10T13:35:54.466 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.465+0000 7f25fb4ef640 1 -- 192.168.123.105:0/154252648 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd crush tunables", "profile": "default"} v 0) -- 0x7f25bc005470 con 0x7f25f4103050 2026-03-10T13:35:54.918 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.918+0000 7f25e27fc640 1 -- 192.168.123.105:0/154252648 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd crush tunables", "profile": "default"}]=0 adjusted tunables profile to default v4) ==== 124+0+0 (unknown 3126668360 0 0) 0x7f25e8005450 con 0x7f25f4103050 2026-03-10T13:35:54.918 INFO:teuthology.orchestra.run.vm05.stderr:adjusted tunables profile to default 2026-03-10T13:35:54.921 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.920+0000 7f25fb4ef640 1 -- 192.168.123.105:0/154252648 >> v1:192.168.123.105:6800/3845654103 conn(0x7f25c8050130 legacy=0x7f25c80525f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:54.921 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.920+0000 7f25fb4ef640 1 -- 192.168.123.105:0/154252648 >> v1:192.168.123.105:6789/0 conn(0x7f25f4103050 legacy=0x7f25f4103a80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:54.921 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.921+0000 7f25fb4ef640 1 -- 192.168.123.105:0/154252648 shutdown_connections 2026-03-10T13:35:54.921 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.921+0000 7f25fb4ef640 1 -- 192.168.123.105:0/154252648 >> 192.168.123.105:0/154252648 conn(0x7f25f40fc520 msgr2=0x7f25f40fcc20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:54.921 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.921+0000 7f25fb4ef640 1 -- 192.168.123.105:0/154252648 shutdown_connections 2026-03-10T13:35:54.921 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:35:54.921+0000 7f25fb4ef640 1 -- 192.168.123.105:0/154252648 wait complete. 2026-03-10T13:35:55.069 INFO:tasks.cephadm:Adding mon.a on vm05 2026-03-10T13:35:55.069 INFO:tasks.cephadm:Adding mon.c on vm05 2026-03-10T13:35:55.069 INFO:tasks.cephadm:Adding mon.b on vm09 2026-03-10T13:35:55.069 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch apply mon '3;vm05:[v1:192.168.123.105:6789]=a;vm05:[v1:192.168.123.105:6790]=c;vm09:[v1:192.168.123.109:6789]=b' 2026-03-10T13:35:55.260 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T13:35:55.308 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T13:35:55.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:54 vm05 ceph-mon[51512]: from='client.14176 v1:192.168.123.105:0/2125508475' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:35:55.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:54 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/154252648' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T13:35:55.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:54 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:55.453 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.452+0000 7fda26840640 1 -- 192.168.123.109:0/4074679225 >> v1:192.168.123.105:6789/0 conn(0x7fda180a4570 legacy=0x7fda180a4970 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:55.453 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.453+0000 7fda26840640 1 -- 192.168.123.109:0/4074679225 shutdown_connections 2026-03-10T13:35:55.454 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.453+0000 7fda26840640 1 -- 192.168.123.109:0/4074679225 >> 192.168.123.109:0/4074679225 conn(0x7fda1809fbc0 msgr2=0x7fda180a2020 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:55.454 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.454+0000 7fda26840640 1 -- 192.168.123.109:0/4074679225 shutdown_connections 2026-03-10T13:35:55.454 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.454+0000 7fda26840640 1 -- 192.168.123.109:0/4074679225 wait complete. 
2026-03-10T13:35:55.454 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.455+0000 7fda26840640 1 Processor -- start 2026-03-10T13:35:55.454 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.455+0000 7fda26840640 1 -- start start 2026-03-10T13:35:55.454 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.455+0000 7fda26840640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fda180b4790 con 0x7fda180a4570 2026-03-10T13:35:55.455 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.455+0000 7fda2583e640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fda180a4570 0x7fda1814edb0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:32838/0 (socket says 192.168.123.109:32838) 2026-03-10T13:35:55.455 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.455+0000 7fda2583e640 1 -- 192.168.123.109:0/1197969755 learned_addr learned my addr 192.168.123.109:0/1197969755 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-10T13:35:55.455 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.455+0000 7fda0effd640 1 -- 192.168.123.109:0/1197969755 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1473611920 0 0) 0x7fda180b4790 con 0x7fda180a4570 2026-03-10T13:35:55.455 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.455+0000 7fda0effd640 1 -- 192.168.123.109:0/1197969755 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd9f4003620 con 0x7fda180a4570 2026-03-10T13:35:55.455 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.456+0000 7fda0effd640 1 -- 192.168.123.109:0/1197969755 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1729731058 0 0) 0x7fd9f4003620 con 0x7fda180a4570 2026-03-10T13:35:55.455 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.456+0000 7fda0effd640 1 -- 192.168.123.109:0/1197969755 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fda180b4790 con 0x7fda180a4570 2026-03-10T13:35:55.455 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.456+0000 7fda0effd640 1 -- 192.168.123.109:0/1197969755 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fda1c004500 con 0x7fda180a4570 2026-03-10T13:35:55.456 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.456+0000 7fda0effd640 1 -- 192.168.123.109:0/1197969755 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3143230621 0 0) 0x7fda180b4790 con 0x7fda180a4570 2026-03-10T13:35:55.456 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.456+0000 7fda0effd640 1 -- 192.168.123.109:0/1197969755 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fda180b4960 con 0x7fda180a4570 2026-03-10T13:35:55.456 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.456+0000 7fda26840640 1 -- 192.168.123.109:0/1197969755 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fda181504d0 con 0x7fda180a4570 2026-03-10T13:35:55.456 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.456+0000 7fda26840640 1 -- 192.168.123.109:0/1197969755 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fda18150870 con 0x7fda180a4570 2026-03-10T13:35:55.456 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.457+0000 7fda0effd640 1 
-- 192.168.123.109:0/1197969755 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fda1c0029a0 con 0x7fda180a4570 2026-03-10T13:35:55.457 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.457+0000 7fda0effd640 1 -- 192.168.123.109:0/1197969755 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7fda1c004cf0 con 0x7fda180a4570 2026-03-10T13:35:55.457 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.457+0000 7fda0effd640 1 -- 192.168.123.109:0/1197969755 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2642844410 0 0) 0x7fda1c004fd0 con 0x7fda180a4570 2026-03-10T13:35:55.457 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.457+0000 7fda26840640 1 -- 192.168.123.109:0/1197969755 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fda180ab300 con 0x7fda180a4570 2026-03-10T13:35:55.458 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.458+0000 7fda0effd640 1 -- 192.168.123.109:0/1197969755 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (unknown 528678538 0 0) 0x7fda1c04e290 con 0x7fda180a4570 2026-03-10T13:35:55.462 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.462+0000 7fda0effd640 1 -- 192.168.123.109:0/1197969755 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fda1c0188c0 con 0x7fda180a4570 2026-03-10T13:35:55.565 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.565+0000 7fda26840640 1 -- 192.168.123.109:0/1197969755 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "placement": "3;vm05:[v1:192.168.123.105:6789]=a;vm05:[v1:192.168.123.105:6790]=c;vm09:[v1:192.168.123.109:6789]=b", "target": ["mon-mgr", ""]}) -- 0x7fda18003480 con 0x7fd9f403ed00 2026-03-10T13:35:55.572 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.572+0000 7fda0effd640 1 -- 192.168.123.109:0/1197969755 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (unknown 0 0 3265049985) 0x7fda18003480 con 0x7fd9f403ed00 2026-03-10T13:35:55.572 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mon update... 
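"Scheduled mon update..." only means the mon service spec was saved; deploying mon.b and mon.c happens asynchronously, which is why the task polls the monmap next. The placement argument packs a count plus explicit host:[addr]=name pins; one plausible way to decompose it (the parsing below is illustrative, not cephadm's actual parser):

    # Hedged sketch: unpack the placement string passed to
    # `ceph orch apply mon` above into (host, addr, mon name) triples.
    placement = ("3;vm05:[v1:192.168.123.105:6789]=a;"
                 "vm05:[v1:192.168.123.105:6790]=c;"
                 "vm09:[v1:192.168.123.109:6789]=b")
    count, *entries = placement.split(";")
    for entry in entries:
        host, rest = entry.split(":", 1)     # "vm05", "[v1:...:6789]=a"
        addr, name = rest.rsplit("=", 1)     # "[v1:...:6789]", "a"
        print(host, addr.strip("[]"), name)  # vm05 v1:192.168.123.105:6789 a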
2026-03-10T13:35:55.575 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.575+0000 7fda26840640 1 -- 192.168.123.109:0/1197969755 >> v1:192.168.123.105:6800/3845654103 conn(0x7fd9f403ed00 legacy=0x7fd9f40411c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:55.575 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.575+0000 7fda26840640 1 -- 192.168.123.109:0/1197969755 >> v1:192.168.123.105:6789/0 conn(0x7fda180a4570 legacy=0x7fda1814edb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:55.575 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.575+0000 7fda26840640 1 -- 192.168.123.109:0/1197969755 shutdown_connections 2026-03-10T13:35:55.575 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.575+0000 7fda26840640 1 -- 192.168.123.109:0/1197969755 >> 192.168.123.109:0/1197969755 conn(0x7fda1809fbc0 msgr2=0x7fda180a2020 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:55.575 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.576+0000 7fda26840640 1 -- 192.168.123.109:0/1197969755 shutdown_connections 2026-03-10T13:35:55.575 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:55.576+0000 7fda26840640 1 -- 192.168.123.109:0/1197969755 wait complete. 2026-03-10T13:35:55.728 DEBUG:teuthology.orchestra.run.vm05:mon.c> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.c.service 2026-03-10T13:35:55.730 DEBUG:teuthology.orchestra.run.vm09:mon.b> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.b.service 2026-03-10T13:35:55.731 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-10T13:35:55.731 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph mon dump -f json 2026-03-10T13:35:55.962 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T13:35:56.006 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T13:35:56.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:55 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/154252648' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T13:35:56.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:55 vm05 ceph-mon[51512]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:35:56.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:55 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:56.160 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.160+0000 7f50e62da640 1 -- 192.168.123.109:0/4264476234 >> v1:192.168.123.105:6789/0 conn(0x7f50e00770a0 legacy=0x7f50e0075500 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:56.161 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.161+0000 7f50e62da640 1 -- 192.168.123.109:0/4264476234 shutdown_connections 2026-03-10T13:35:56.161 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.161+0000 7f50e62da640 1 -- 192.168.123.109:0/4264476234 >> 192.168.123.109:0/4264476234 conn(0x7f50e00fd820 msgr2=0x7f50e00ffc40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:56.161 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.161+0000 7f50e62da640 1 -- 192.168.123.109:0/4264476234 shutdown_connections 2026-03-10T13:35:56.161 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.161+0000 7f50e62da640 1 -- 192.168.123.109:0/4264476234 wait complete. 2026-03-10T13:35:56.161 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.161+0000 7f50e62da640 1 Processor -- start 2026-03-10T13:35:56.161 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.162+0000 7f50e62da640 1 -- start start 2026-03-10T13:35:56.161 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.162+0000 7f50e62da640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f50e019f820 con 0x7f50e00770a0 2026-03-10T13:35:56.162 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.162+0000 7f50e52d8640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f50e00770a0 0x7f50e019f110 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:58738/0 (socket says 192.168.123.109:58738) 2026-03-10T13:35:56.162 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.162+0000 7f50e52d8640 1 -- 192.168.123.109:0/3736169156 learned_addr learned my addr 192.168.123.109:0/3736169156 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-10T13:35:56.162 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.162+0000 7f50ce7fc640 1 -- 192.168.123.109:0/3736169156 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2550176389 0 0) 0x7f50e019f820 con 0x7f50e00770a0 2026-03-10T13:35:56.162 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.162+0000 7f50ce7fc640 1 -- 192.168.123.109:0/3736169156 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f50bc003620 con 0x7f50e00770a0 2026-03-10T13:35:56.162 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.162+0000 7f50ce7fc640 1 -- 192.168.123.109:0/3736169156 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 449733865 0 0) 0x7f50bc003620 con 0x7f50e00770a0 2026-03-10T13:35:56.162 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.162+0000 7f50ce7fc640 1 -- 192.168.123.109:0/3736169156 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 
0x7f50e019f820 con 0x7f50e00770a0 2026-03-10T13:35:56.162 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.162+0000 7f50ce7fc640 1 -- 192.168.123.109:0/3736169156 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f50dc0048d0 con 0x7f50e00770a0 2026-03-10T13:35:56.162 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.163+0000 7f50ce7fc640 1 -- 192.168.123.109:0/3736169156 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 69160002 0 0) 0x7f50e019f820 con 0x7f50e00770a0 2026-03-10T13:35:56.162 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.163+0000 7f50ce7fc640 1 -- 192.168.123.109:0/3736169156 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f50e019f9f0 con 0x7f50e00770a0 2026-03-10T13:35:56.162 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.163+0000 7f50e62da640 1 -- 192.168.123.109:0/3736169156 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f50e019fd00 con 0x7f50e00770a0 2026-03-10T13:35:56.163 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.163+0000 7f50e62da640 1 -- 192.168.123.109:0/3736169156 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f50e01a3890 con 0x7f50e00770a0 2026-03-10T13:35:56.163 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.163+0000 7f50ce7fc640 1 -- 192.168.123.109:0/3736169156 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f50dc002d70 con 0x7f50e00770a0 2026-03-10T13:35:56.163 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.163+0000 7f50ce7fc640 1 -- 192.168.123.109:0/3736169156 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f50dc005150 con 0x7f50e00770a0 2026-03-10T13:35:56.163 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.164+0000 7f50ce7fc640 1 -- 192.168.123.109:0/3736169156 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2642844410 0 0) 0x7f50dc0036b0 con 0x7f50e00770a0 2026-03-10T13:35:56.163 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.164+0000 7f50e62da640 1 -- 192.168.123.109:0/3736169156 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f50e010edc0 con 0x7f50e00770a0 2026-03-10T13:35:56.164 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.164+0000 7f50ce7fc640 1 -- 192.168.123.109:0/3736169156 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (unknown 528678538 0 0) 0x7f50dc04d8a0 con 0x7f50e00770a0 2026-03-10T13:35:56.167 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.167+0000 7f50ce7fc640 1 -- 192.168.123.109:0/3736169156 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f50dc017f30 con 0x7f50e00770a0 2026-03-10T13:35:56.309 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.308+0000 7f50e62da640 1 -- 192.168.123.109:0/3736169156 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "mon dump", "format": "json"} v 0) -- 0x7f50e010f000 con 0x7f50e00770a0 2026-03-10T13:35:56.309 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.309+0000 7f50ce7fc640 1 -- 192.168.123.109:0/3736169156 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped 
monmap epoch 1 v1) ==== 95+0+699 (unknown 2237029548 0 2133020574) 0x7f50dc04c140 con 0x7f50e00770a0 2026-03-10T13:35:56.310 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:35:56.310 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"e063dc72-1c85-11f1-a098-09993c5c5b66","modified":"2026-03-10T13:35:21.154333Z","created":"2026-03-10T13:35:21.154333Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T13:35:56.310 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T13:35:56.312 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.312+0000 7f50e62da640 1 -- 192.168.123.109:0/3736169156 >> v1:192.168.123.105:6800/3845654103 conn(0x7f50bc03e960 legacy=0x7f50bc040e20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:56.312 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.312+0000 7f50e62da640 1 -- 192.168.123.109:0/3736169156 >> v1:192.168.123.105:6789/0 conn(0x7f50e00770a0 legacy=0x7f50e019f110 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:35:56.312 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.312+0000 7f50e62da640 1 -- 192.168.123.109:0/3736169156 shutdown_connections 2026-03-10T13:35:56.312 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.312+0000 7f50e62da640 1 -- 192.168.123.109:0/3736169156 >> 192.168.123.109:0/3736169156 conn(0x7f50e00fd820 msgr2=0x7f50e00ff9c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:35:56.312 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.312+0000 7f50e62da640 1 -- 192.168.123.109:0/3736169156 shutdown_connections 2026-03-10T13:35:56.312 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:56.312+0000 7f50e62da640 1 -- 192.168.123.109:0/3736169156 wait complete. 
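The mon dump above still shows monmap epoch 1 with only mon.a in quorum, so "Waiting for 3 mons in monmap..." repeats. The wait amounts to polling `ceph mon dump -f json` until all three names appear; a minimal sketch of such a loop (helper name and sleep interval are assumptions):

    # Hedged sketch of the implied wait loop: poll the monmap until
    # mons a, b and c are all present.
    import json
    import subprocess
    import time

    def mon_names() -> set[str]:
        out = subprocess.run(
            ["sudo", "ceph", "mon", "dump", "-f", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        # The monmap JSON shown above carries a "mons" list of {"name": ...}.
        return {m["name"] for m in json.loads(out)["mons"]}

    while not {"a", "b", "c"} <= mon_names():
        time.sleep(1)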
2026-03-10T13:35:57.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: from='client.14180 v1:192.168.123.109:0/1197969755' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm05:[v1:192.168.123.105:6789]=a;vm05:[v1:192.168.123.105:6790]=c;vm09:[v1:192.168.123.109:6789]=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:35:57.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: Saving service mon spec with placement vm05:[v1:192.168.123.105:6789]=a;vm05:[v1:192.168.123.105:6790]=c;vm09:[v1:192.168.123.109:6789]=b;count:3
2026-03-10T13:35:57.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:35:57.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:35:57.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:35:57.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:35:57.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:35:57.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:35:57.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:35:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: Updating vm09:/etc/ceph/ceph.conf
2026-03-10T13:35:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: Updating vm09:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.conf
2026-03-10T13:35:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: Updating vm09:/etc/ceph/ceph.client.admin.keyring
2026-03-10T13:35:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.109:0/3736169156' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T13:35:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: Updating vm09:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.client.admin.keyring
2026-03-10T13:35:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:35:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:35:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:35:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T13:35:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:35:57.482 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-10T13:35:57.482 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph mon dump -f json
2026-03-10T13:35:57.718 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config
2026-03-10T13:35:58.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.148+0000 7f53c1865640 1 -- 192.168.123.109:0/2731797874 >> v1:192.168.123.105:6789/0 conn(0x7f53bc073c40 legacy=0x7f53bc074020 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:35:58.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.148+0000 7f53c1865640 1 -- 192.168.123.109:0/2731797874 shutdown_connections
2026-03-10T13:35:58.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.148+0000 7f53c1865640 1 -- 192.168.123.109:0/2731797874 >> 192.168.123.109:0/2731797874 conn(0x7f53bc06d1e0 msgr2=0x7f53bc06d5f0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:35:58.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.146+0000 7f53bb7fe640 1 -- 192.168.123.109:0/2731797874 <== mon.0 v1:192.168.123.105:6789/0 5 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f53ac004800 con 0x7f53bc073c40
2026-03-10T13:35:58.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.148+0000 7f53c1865640 1 -- 192.168.123.109:0/2731797874 shutdown_connections
2026-03-10T13:35:58.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.148+0000 7f53c1865640 1 -- 192.168.123.109:0/2731797874 wait complete.
2026-03-10T13:35:58.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.150+0000 7f53c1865640 1 Processor -- start
2026-03-10T13:35:58.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.150+0000 7f53c1865640 1 -- start start
2026-03-10T13:35:58.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.150+0000 7f53c1865640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f53bc086430 con 0x7f53bc0824d0
2026-03-10T13:35:58.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.150+0000 7f53c0863640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f53bc0824d0 0x7f53bc0828b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:58752/0 (socket says 192.168.123.109:58752)
2026-03-10T13:35:58.150 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.150+0000 7f53c0863640 1 -- 192.168.123.109:0/1049391118 learned_addr learned my addr 192.168.123.109:0/1049391118 (peer_addr_for_me v1:192.168.123.109:0/0)
2026-03-10T13:35:58.151 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.151+0000 7f53b9ffb640 1 -- 192.168.123.109:0/1049391118 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 495529613 0 0) 0x7f53bc086430 con 0x7f53bc0824d0
2026-03-10T13:35:58.151 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.151+0000 7f53b9ffb640 1 -- 192.168.123.109:0/1049391118 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5390003620 con 0x7f53bc0824d0
2026-03-10T13:35:58.151 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.151+0000 7f53b9ffb640 1 -- 192.168.123.109:0/1049391118 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3881119921 0 0) 0x7f5390003620 con 0x7f53bc0824d0
2026-03-10T13:35:58.151 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.151+0000 7f53b9ffb640 1 -- 192.168.123.109:0/1049391118 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f53bc086430 con 0x7f53bc0824d0
2026-03-10T13:35:58.151 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.151+0000 7f53b9ffb640 1 -- 192.168.123.109:0/1049391118 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f53ac003170 con 0x7f53bc0824d0
2026-03-10T13:35:58.151 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.151+0000 7f53b9ffb640 1 -- 192.168.123.109:0/1049391118 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2739362985 0 0) 0x7f53bc086430 con 0x7f53bc0824d0
2026-03-10T13:35:58.151 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.151+0000 7f53b9ffb640 1 -- 192.168.123.109:0/1049391118 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f53bc082fc0 con 0x7f53bc0824d0
2026-03-10T13:35:58.151 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.152+0000 7f53c1865640 1 -- 192.168.123.109:0/1049391118 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f53bc0832b0 con 0x7f53bc0824d0
2026-03-10T13:35:58.151 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.152+0000 7f53c1865640 1 -- 192.168.123.109:0/1049391118 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f53bc1bde10 con 0x7f53bc0824d0
2026-03-10T13:35:58.152 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.152+0000 7f53b9ffb640 1 -- 192.168.123.109:0/1049391118 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f53ac002830 con 0x7f53bc0824d0
2026-03-10T13:35:58.152 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.152+0000 7f53b9ffb640 1 -- 192.168.123.109:0/1049391118 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 170+0+0 (unknown 4183727868 0 0) 0x7f53ac004dd0 con 0x7f53bc0824d0
2026-03-10T13:35:58.152 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.153+0000 7f53b9ffb640 1 -- 192.168.123.109:0/1049391118 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2642844410 0 0) 0x7f53ac0050b0 con 0x7f53bc0824d0
2026-03-10T13:35:58.153 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.153+0000 7f53b9ffb640 1 -- 192.168.123.109:0/1049391118 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (unknown 528678538 0 0) 0x7f53ac04e0a0 con 0x7f53bc0824d0
2026-03-10T13:35:58.155 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.153+0000 7f53c1865640 1 -- 192.168.123.109:0/1049391118 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f53bc074e50 con 0x7f53bc0824d0
2026-03-10T13:35:58.156 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.157+0000 7f53b9ffb640 1 -- 192.168.123.109:0/1049391118 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f53ac0186d0 con 0x7f53bc0824d0
2026-03-10T13:35:58.303 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.301+0000 7f53b9ffb640 1 -- 192.168.123.109:0/1049391118 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_map magic: 0 ==== 239+0+0 (unknown 1527143693 0 0) 0x7f53ac017da0 con 0x7f53bc0824d0
2026-03-10T13:35:58.316 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:35:58.316+0000 7f53c1865640 1 -- 192.168.123.109:0/1049391118 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "mon dump", "format": "json"} v 0) -- 0x7f53bc078f60 con 0x7f53bc0824d0
2026-03-10T13:35:58.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:35:57 vm05 ceph-mon[51512]: Deploying daemon mon.b on vm09
2026-03-10T13:35:59.371 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 podman[58922]: 2026-03-10 13:35:59.340201429 +0000 UTC m=+0.017082529 container create fb825a8a53354a45bcc414311b4020f6e6e36c7d88c8a3339968221bfe0c3da7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-c, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-10T13:35:59.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 podman[58922]: 2026-03-10 13:35:59.37467334 +0000 UTC m=+0.051554440 container init fb825a8a53354a45bcc414311b4020f6e6e36c7d88c8a3339968221bfe0c3da7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-c, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 podman[58922]: 2026-03-10 13:35:59.3802916 +0000 UTC m=+0.057172700 container start fb825a8a53354a45bcc414311b4020f6e6e36c7d88c8a3339968221bfe0c3da7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-c, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 bash[58922]: fb825a8a53354a45bcc414311b4020f6e6e36c7d88c8a3339968221bfe0c3da7
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 podman[58922]: 2026-03-10 13:35:59.33277933 +0000 UTC m=+0.009660441 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 systemd[1]: Started Ceph mon.c for e063dc72-1c85-11f1-a098-09993c5c5b66.
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: set uid:gid to 167:167 (ceph:ceph)
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 6
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: pidfile_write: ignore empty --pid-file
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: load: jerasure load: lrc
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: RocksDB version: 7.9.2
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Git sha 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Compile date 2026-02-25 18:11:04
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: DB SUMMARY
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: DB Session ID: LUO57VM5BQO6ADX59MJ0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: CURRENT file: CURRENT
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: IDENTITY file: IDENTITY
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: SST files in /var/lib/ceph/mon/ceph-c/store.db dir, Total Num: 0, files:
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-c/store.db: 000004.log size: 476 ;
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.error_if_exists: 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.create_if_missing: 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.paranoid_checks: 1
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.env: 0x563393c7ddc0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.fs: PosixFileSystem
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.info_log: 0x5633963365c0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_file_opening_threads: 16
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.statistics: (nil)
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.use_fsync: 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_log_file_size: 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.keep_log_file_num: 1000
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.recycle_log_file_num: 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.allow_fallocate: 1
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.allow_mmap_reads: 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.allow_mmap_writes: 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.use_direct_reads: 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.create_missing_column_families: 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.db_log_dir:
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.wal_dir:
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.advise_random_on_open: 1
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.db_write_buffer_size: 0
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.write_buffer_manager: 0x56339633b900
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T13:35:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.rate_limiter: (nil)
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.wal_recovery_mode: 2
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.enable_thread_tracking: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.enable_pipelined_write: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.unordered_write: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.row_cache: None
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.wal_filter: None
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.allow_ingest_behind: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.two_write_queues: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.manual_wal_flush: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.wal_compression: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.atomic_flush: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.log_readahead_size: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.best_efforts_recovery: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.allow_data_in_errors: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.db_host_id: __hostname__
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.enforce_single_del_contracts: true
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_background_jobs: 2
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_background_compactions: -1
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_subcompactions: 1
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_total_wal_size: 0
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_open_files: -1
2026-03-10T13:35:59.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.bytes_per_sync: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compaction_readahead_size: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_background_flushes: -1
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Compression algorithms supported:
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: kZSTD supported: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: kXpressCompression supported: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: kBZip2Compression supported: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: kLZ4Compression supported: 1
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: kZlibCompression supported: 1
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: kLZ4HCCompression supported: 1
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: kSnappyCompression supported: 1
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: DMutex implementation: pthread_mutex_t
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.merge_operator:
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compaction_filter: None
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compaction_filter_factory: None
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.sst_partitioner_factory: None
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633963365a0)
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: cache_index_and_filter_blocks: 1
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: pin_top_level_index_and_filter: 1
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: index_type: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: data_block_index_type: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: index_shortening: 1
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: data_block_hash_table_util_ratio: 0.750000
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: checksum: 4
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: no_block_cache: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: block_cache: 0x56339635b350
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: block_cache_name: BinnedLRUCache
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: block_cache_options:
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: capacity : 536870912
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: num_shard_bits : 4
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: strict_capacity_limit : 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: high_pri_pool_ratio: 0.000
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: block_cache_compressed: (nil)
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: persistent_cache: (nil)
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: block_size: 4096
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: block_size_deviation: 10
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: block_restart_interval: 16
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: index_block_restart_interval: 1
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: metadata_block_size: 4096
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: partition_filters: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: use_delta_encoding: 1
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: filter_policy: bloomfilter
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: whole_key_filtering: 1
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: verify_compression: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: read_amp_bytes_per_bit: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: format_version: 5
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: enable_index_compression: 1
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: block_align: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: max_auto_readahead_size: 262144
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: prepopulate_block_cache: 0
2026-03-10T13:35:59.834 INFO:journalctl@ceph.mon.c.vm05.stdout: initial_auto_readahead_size: 8192
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout: num_file_reads_for_auto_readahead: 2
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.write_buffer_size: 33554432
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_write_buffer_number: 2
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compression: NoCompression
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.bottommost_compression: Disabled
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.prefix_extractor: nullptr
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.num_levels: 7
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compression_opts.level: 32767
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compression_opts.strategy: 0
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compression_opts.enabled: false
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.target_file_size_base: 67108864
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.arena_block_size: 1048576
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.disable_auto_compactions: 0
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.inplace_update_support: 0
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T13:35:59.835 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.bloom_locality: 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.max_successive_merges: 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.paranoid_file_checks: 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.force_consistency_checks: 1
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.report_bg_io_stats: 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.ttl: 2592000
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.enable_blob_files: false
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.min_blob_size: 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.blob_file_size: 268435456
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.blob_file_starting_level: 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 060daddd-eb46-4d72-a67c-6c10a4ad4457
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773149759408528, "job": 1, "event": "recovery_started", "wal_files": [4]}
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773149759409145, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 488, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 366, "raw_average_value_size": 73, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773149759, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "060daddd-eb46-4d72-a67c-6c10a4ad4457", "db_session_id": "LUO57VM5BQO6ADX59MJ0", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773149759409206, "job": 1, "event": "recovery_finished"}
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56339635ce00
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: DB pointer 0x563396476000
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: ** DB Stats **
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: ** Compaction Stats [default] **
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: L0 1/0 1.57 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.5 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Sum 1/0 1.57 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.5 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.5 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: ** Compaction Stats [default] **
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.5 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout:
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Flush(GB): cumulative 0.000, interval 0.000
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: AddFile(Total Files): cumulative 0, interval 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: AddFile(L0 Files): cumulative 0, interval 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: AddFile(Keys): cumulative 0, interval 0
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Cumulative compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T13:35:59.836 INFO:journalctl@ceph.mon.c.vm05.stdout: Interval compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout: Block cache BinnedLRUCache@0x56339635b350#6 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7e-06 secs_since: 0
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout: Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout: ** File Read Latency Histogram By Level [default] **
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.c does not exist in monmap, will attempt to join an existing cluster
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: using public_addrv v1:192.168.123.105:6790/0
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: starting mon.c rank -1 at public addrs v1:192.168.123.105:6790/0 at bind addrs v1:192.168.123.105:6790/0 mon_data /var/lib/ceph/mon/ceph-c fsid e063dc72-1c85-11f1-a098-09993c5c5b66
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.c@-1(???) e0 preinit fsid e063dc72-1c85-11f1-a098-09993c5c5b66
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.c@-1(synchronizing).mds e1 new map
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.c@-1(synchronizing).mds e1 print_map
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout: e1
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout: btime 2026-03-10T13:35:24.005627+0000
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout: enable_multiple, ever_enabled_multiple: 1,1
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout: legacy client fscid: -1
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout: No filesystems configured
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.c@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.c@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.c@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.c@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.c@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.c@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.c@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mkfs e063dc72-1c85-11f1-a098-09993c5c5b66
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: monmap epoch 1
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: fsid e063dc72-1c85-11f1-a098-09993c5c5b66
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: last_changed 2026-03-10T13:35:21.154333+0000
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: created 2026-03-10T13:35:21.154333+0000
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: min_mon_release 19 (squid)
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: election_strategy: 1
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: 0: v1:192.168.123.105:6789/0 mon.a
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: fsmap
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: osdmap e1: 0 total, 0 up, 0 in
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mgrmap e1: no daemons active
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1835639079' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3812143385' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3812143385' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.?
v1:192.168.123.105:0/2446548672' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: monmap epoch 1 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: fsid e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: last_changed 2026-03-10T13:35:21.154333+0000 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: created 2026-03-10T13:35:21.154333+0000 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: min_mon_release 19 (squid) 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: election_strategy: 1 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: 0: v1:192.168.123.105:6789/0 mon.a 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: fsmap 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mgrmap e1: no daemons active 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3692721789' entity='client.admin' 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3041849910' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1491529870' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Activating manager daemon y 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mgrmap e2: y(active, starting, since 0.00378319s) 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:35:59.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Manager daemon y is now available 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14100 v1:192.168.123.105:0/2687028627' entity='mgr.y' 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mgrmap e3: y(active, since 1.00882s) 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4181669056' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mgrmap e4: y(active, since 2s) 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3561067797' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3695914686' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3695914686' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mgrmap e5: y(active, since 3s) 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3249480924' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Active manager daemon y restarted 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Activating manager daemon y 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: osdmap e2: 0 total, 0 up, 0 in 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mgrmap e6: y(active, starting, since 0.00762872s) 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Manager daemon y is now available 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:35:59.838 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Found migration_current of "None". Setting to last migration. 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mgrmap e7: y(active, since 1.01006s) 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14122 v1:192.168.123.105:0/2045346811' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14122 v1:192.168.123.105:0/2045346811' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14130 v1:192.168.123.105:0/1844398852' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: [10/Mar/2026:13:35:37] ENGINE Bus STARTING 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: [10/Mar/2026:13:35:37] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: [10/Mar/2026:13:35:37] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: [10/Mar/2026:13:35:37] ENGINE Bus STARTED 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: [10/Mar/2026:13:35:37] ENGINE Client ('192.168.123.105', 44358) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14132 v1:192.168.123.105:0/466531122' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' 
entity='mgr.y' 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14134 v1:192.168.123.105:0/3592973527' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Generating ssh key... 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mgrmap e8: y(active, since 2s) 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14136 v1:192.168.123.105:0/866114034' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14138 v1:192.168.123.105:0/1117278794' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm05", "addr": "192.168.123.105", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Deploying cephadm binary to vm05 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Added host vm05 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14140 v1:192.168.123.105:0/4293061812' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Saving service mon spec with placement count:5 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14142 v1:192.168.123.105:0/178297727' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Saving service mgr spec with placement count:2 2026-03-10T13:35:59.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3950193774' entity='client.admin' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/463622433' entity='client.admin' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3028328979' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14118 v1:192.168.123.105:0/3540588486' entity='mgr.y' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3028328979' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mgrmap e9: y(active, since 7s) 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1095205776' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Active manager daemon y restarted 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Activating manager daemon y 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: osdmap e3: 0 total, 0 up, 0 in 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mgrmap e10: y(active, starting, since 0.00717009s) 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Manager daemon y is now available 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:59.839 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: [10/Mar/2026:13:35:47] ENGINE Bus STARTING 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: [10/Mar/2026:13:35:47] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: [10/Mar/2026:13:35:47] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: [10/Mar/2026:13:35:47] ENGINE Bus STARTED 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: [10/Mar/2026:13:35:47] ENGINE Client ('192.168.123.105', 35700) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mgrmap e11: y(active, since 1.0106s) 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14154 v1:192.168.123.105:0/3925643261' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14154 v1:192.168.123.105:0/3925643261' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14162 v1:192.168.123.105:0/1595454539' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14164 v1:192.168.123.105:0/850809012' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 
ceph-mon[58955]: from='client.? v1:192.168.123.105:0/480123243' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mgrmap e12: y(active, since 2s) 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/653341670' entity='client.admin' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/916184332' entity='client.admin' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14172 v1:192.168.123.105:0/872138806' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Updating vm05:/etc/ceph/ceph.conf 2026-03-10T13:35:59.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Updating vm05:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.conf 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Updating 
vm05:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.client.admin.keyring 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14174 v1:192.168.123.105:0/1533102281' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Deploying cephadm binary to vm09 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mgrmap e13: y(active, since 6s) 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Added host vm09 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14176 v1:192.168.123.105:0/2125508475' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/154252648' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/154252648' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.14180 v1:192.168.123.109:0/1197969755' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm05:[v1:192.168.123.105:6789]=a;vm05:[v1:192.168.123.105:6790]=c;vm09:[v1:192.168.123.109:6789]=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Saving service mon spec with placement vm05:[v1:192.168.123.105:6789]=a;vm05:[v1:192.168.123.105:6790]=c;vm09:[v1:192.168.123.109:6789]=b;count:3 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Updating vm09:/etc/ceph/ceph.conf 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Updating vm09:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.conf 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.109:0/3736169156' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Updating vm09:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.client.admin.keyring 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: Deploying daemon mon.b on vm09 2026-03-10T13:35:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:35:59 vm05 ceph-mon[58955]: mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-10T13:36:03.313 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:03.313+0000 7f53b9ffb640 1 -- 192.168.123.109:0/1049391118 <== mon.0 v1:192.168.123.105:6789/0 11 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 2 v2) ==== 95+0+923 (unknown 3557084514 0 3520362642) 0x7f53ac04d930 con 0x7f53bc0824d0 2026-03-10T13:36:03.313 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:36:03.313 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":2,"fsid":"e063dc72-1c85-11f1-a098-09993c5c5b66","modified":"2026-03-10T13:35:58.298421Z","created":"2026-03-10T13:35:21.154333Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-10T13:36:03.313 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 2 2026-03-10T13:36:03.315 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:03.315+0000 7f53c1865640 1 -- 192.168.123.109:0/1049391118 >> v1:192.168.123.105:6800/3845654103 conn(0x7f539003ed20 legacy=0x7f53900411e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:03.315 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:03.315+0000 7f53c1865640 1 -- 192.168.123.109:0/1049391118 >> v1:192.168.123.105:6789/0 conn(0x7f53bc0824d0 legacy=0x7f53bc0828b0 
unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:03.315 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:03.315+0000 7f53c1865640 1 -- 192.168.123.109:0/1049391118 shutdown_connections 2026-03-10T13:36:03.315 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:03.315+0000 7f53c1865640 1 -- 192.168.123.109:0/1049391118 >> 192.168.123.109:0/1049391118 conn(0x7f53bc06d1e0 msgr2=0x7f53bc076370 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:36:03.315 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:03.315+0000 7f53c1865640 1 -- 192.168.123.109:0/1049391118 shutdown_connections 2026-03-10T13:36:03.315 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:03.315+0000 7f53c1865640 1 -- 192.168.123.109:0/1049391118 wait complete. 2026-03-10T13:36:03.665 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: Deploying daemon mon.c on vm05 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: mon.a calling monitor election 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.109:0/1049391118' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: mon.b calling monitor election 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 
cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: monmap epoch 2 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: fsid e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: last_changed 2026-03-10T13:35:58.298421+0000 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: created 2026-03-10T13:35:21.154333+0000 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: min_mon_release 19 (squid) 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: election_strategy: 1 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: 0: v1:192.168.123.105:6789/0 mon.a 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: 1: v1:192.168.123.109:6789/0 mon.b 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: fsmap 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: mgrmap e13: y(active, since 16s) 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: overall HEALTH_OK 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:03.666 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:04.479 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-10T13:36:04.479 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph mon dump -f json
2026-03-10T13:36:04.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:36:04 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:36:04.292+0000 7ff7d12b6640 -1 mgr.server handle_report got status from non-daemon mon.b
2026-03-10T13:36:04.628 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config
2026-03-10T13:36:08.460 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.459+0000 7f8cd8789640 1 -- 192.168.123.109:0/1082657250 >> v1:192.168.123.105:6789/0 conn(0x7f8cb0003660 legacy=0x7f8cb0005af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:36:08.460 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.460+0000 7f8cd8789640 1 -- 192.168.123.109:0/1082657250 shutdown_connections
2026-03-10T13:36:08.460 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.460+0000 7f8cd8789640 1 -- 192.168.123.109:0/1082657250 >> 192.168.123.109:0/1082657250 conn(0x7f8cd0100250 msgr2=0x7f8cd0102670 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:36:08.460 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.460+0000 7f8cd8789640 1 -- 192.168.123.109:0/1082657250 shutdown_connections
2026-03-10T13:36:08.461 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.460+0000 7f8cd8789640 1 -- 192.168.123.109:0/1082657250 wait complete.
2026-03-10T13:36:08.461 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.461+0000 7f8cd8789640 1 Processor -- start
2026-03-10T13:36:08.461 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.461+0000 7f8cd8789640 1 -- start start
2026-03-10T13:36:08.461 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.461+0000 7f8cd8789640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8cd01b81e0 con 0x7f8cd01106b0
2026-03-10T13:36:08.461 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.461+0000 7f8cd8789640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8cd01b93c0 con 0x7f8cb0003660
2026-03-10T13:36:08.461 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.461+0000 7f8cd8789640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8cd01ba5c0 con 0x7f8cd0110b20
2026-03-10T13:36:08.461 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.461+0000 7f8cd64fe640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f8cb0003660 0x7f8cd010ffa0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.109:44320/0 (socket says 192.168.123.109:44320)
2026-03-10T13:36:08.461 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.461+0000 7f8cd64fe640 1 -- 192.168.123.109:0/837882518 learned_addr learned my addr 192.168.123.109:0/837882518 (peer_addr_for_me v1:192.168.123.109:0/0)
2026-03-10T13:36:08.463 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.463+0000 7f8cc77fe640 1 -- 192.168.123.109:0/837882518 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 247547447 0 0) 0x7f8cd01b81e0 con 0x7f8cd01106b0
2026-03-10T13:36:08.463 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.463+0000 7f8cc77fe640 1 -- 192.168.123.109:0/837882518 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8ca4003620 con 0x7f8cd01106b0
2026-03-10T13:36:08.463 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.463+0000 7f8cc77fe640 1 -- 192.168.123.109:0/837882518 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1970798965 0 0) 0x7f8ca4003620 con 0x7f8cd01106b0
2026-03-10T13:36:08.463 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.464+0000 7f8cc77fe640 1 -- 192.168.123.109:0/837882518 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f8cd01b81e0 con 0x7f8cd01106b0
2026-03-10T13:36:08.463 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.464+0000 7f8cc77fe640 1 -- 192.168.123.109:0/837882518 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f8cc0002ff0 con 0x7f8cd01106b0
2026-03-10T13:36:08.466 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.466+0000 7f8cc77fe640 1 -- 192.168.123.109:0/837882518 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 4190130445 0 0) 0x7f8cd01b81e0 con 0x7f8cd01106b0
2026-03-10T13:36:08.466 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.466+0000 7f8cc77fe640 1 -- 192.168.123.109:0/837882518 >> v1:192.168.123.105:6790/0 conn(0x7f8cd0110b20 legacy=0x7f8cd01b69a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:36:08.466 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.466+0000 7f8cc77fe640 1 -- 192.168.123.109:0/837882518 >> v1:192.168.123.109:6789/0 conn(0x7f8cb0003660 legacy=0x7f8cd010ffa0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:36:08.466 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.466+0000 7f8cc77fe640 1 -- 192.168.123.109:0/837882518 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8cd01bb7c0 con 0x7f8cd01106b0
2026-03-10T13:36:08.466 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.466+0000 7f8cd8789640 1 -- 192.168.123.109:0/837882518 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f8cd01ba7f0 con 0x7f8cd01106b0
2026-03-10T13:36:08.466 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.466+0000 7f8cd8789640 1 -- 192.168.123.109:0/837882518 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f8cd01bada0 con 0x7f8cd01106b0
2026-03-10T13:36:08.466 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.467+0000 7f8cc77fe640 1 -- 192.168.123.109:0/837882518 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f8cc0003190 con 0x7f8cd01106b0
2026-03-10T13:36:08.466 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.467+0000 7f8cc77fe640 1 -- 192.168.123.109:0/837882518 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f8cc00049e0 con 0x7f8cd01106b0
2026-03-10T13:36:08.467 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.467+0000 7f8cc77fe640 1 -- 192.168.123.109:0/837882518 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2642844410 0 0) 0x7f8cc0011090 con 0x7f8cd01106b0
2026-03-10T13:36:08.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.468+0000 7f8cc77fe640 1 -- 192.168.123.109:0/837882518 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (unknown 528678538 0 0) 0x7f8cc004d090 con 0x7f8cd01106b0
2026-03-10T13:36:08.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.468+0000 7f8cd8789640 1 -- 192.168.123.109:0/837882518 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8cd0105cf0 con 0x7f8cd01106b0
2026-03-10T13:36:08.473 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.473+0000 7f8cc77fe640 1 -- 192.168.123.109:0/837882518 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f8cc0017600 con 0x7f8cd01106b0
2026-03-10T13:36:08.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.642+0000 7f8cd8789640 1 -- 192.168.123.109:0/837882518 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "mon dump", "format": "json"} v 0) -- 0x7f8cd010d860 con 0x7f8cd01106b0
2026-03-10T13:36:08.642 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.643+0000 7f8cc77fe640 1 -- 192.168.123.109:0/837882518 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 3 v3) ==== 95+0+1145 (unknown 409899127 0 2765351517) 0x7f8cc0020130 con 0x7f8cd01106b0
2026-03-10T13:36:08.643 INFO:teuthology.orchestra.run.vm09.stdout: 
2026-03-10T13:36:08.643 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":3,"fsid":"e063dc72-1c85-11f1-a098-09993c5c5b66","modified":"2026-03-10T13:36:03.434471Z","created":"2026-03-10T13:35:21.154333Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6790","nonce":0}]},"addr":"192.168.123.105:6790/0","public_addr":"192.168.123.105:6790/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-10T13:36:08.643 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 3
2026-03-10T13:36:08.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.645+0000 7f8cd8789640 1 -- 192.168.123.109:0/837882518 >> v1:192.168.123.105:6800/3845654103 conn(0x7f8ca403eb30 legacy=0x7f8ca4040ff0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:36:08.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.645+0000 7f8cd8789640 1 -- 192.168.123.109:0/837882518 >> v1:192.168.123.105:6789/0 conn(0x7f8cd01106b0 legacy=0x7f8cd01b3270 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:36:08.650 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.646+0000 7f8cd8789640 1 -- 192.168.123.109:0/837882518 shutdown_connections
2026-03-10T13:36:08.650 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.646+0000 7f8cd8789640 1 -- 192.168.123.109:0/837882518 >> 192.168.123.109:0/837882518 conn(0x7f8cd0100250 msgr2=0x7f8cd0100630 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:36:08.650 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.646+0000 7f8cd8789640 1 -- 192.168.123.109:0/837882518 shutdown_connections
2026-03-10T13:36:08.651 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:08.646+0000 7f8cd8789640 1 -- 192.168.123.109:0/837882518 wait complete.
2026-03-10T13:36:08.831 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-10T13:36:08.831 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph config generate-minimal-conf
2026-03-10T13:36:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:36:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: mon.a calling monitor election
2026-03-10T13:36:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:36:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:36:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: mon.b calling monitor election
2026-03-10T13:36:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:36:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:36:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:36:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:36:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:36:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: monmap epoch 3
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: fsid e063dc72-1c85-11f1-a098-09993c5c5b66
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: last_changed 2026-03-10T13:36:03.434471+0000
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: created 2026-03-10T13:35:21.154333+0000
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: min_mon_release 19 (squid)
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: election_strategy: 1
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: 0: v1:192.168.123.105:6789/0 mon.a
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: 1: v1:192.168.123.109:6789/0 mon.b
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: 2: v1:192.168.123.105:6790/0 mon.c
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: fsmap
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: osdmap e4: 0 total, 0 up, 0 in
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: mgrmap e13: y(active, since 21s)
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: overall HEALTH_OK
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:36:09.053 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config
2026-03-10T13:36:09.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.227+0000 7f006e958640 1 -- 192.168.123.105:0/2839215915 >> v1:192.168.123.105:6789/0 conn(0x7f00600a45d0 legacy=0x7f00600a49b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:36:09.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.228+0000 7f006e958640 1 -- 192.168.123.105:0/2839215915 shutdown_connections
2026-03-10T13:36:09.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.228+0000 7f006e958640 1 -- 192.168.123.105:0/2839215915 >> 192.168.123.105:0/2839215915 conn(0x7f006001a120 msgr2=0x7f006001a530 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:36:09.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.228+0000 7f006e958640 1 -- 
192.168.123.105:0/2839215915 shutdown_connections 2026-03-10T13:36:09.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.228+0000 7f006e958640 1 -- 192.168.123.105:0/2839215915 wait complete. 2026-03-10T13:36:09.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.229+0000 7f006e958640 1 Processor -- start 2026-03-10T13:36:09.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.229+0000 7f006e958640 1 -- start start 2026-03-10T13:36:09.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.229+0000 7f006e958640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f006015c570 con 0x7f00600a45d0 2026-03-10T13:36:09.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.229+0000 7f006e958640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f006015d770 con 0x7f00600b4c60 2026-03-10T13:36:09.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.229+0000 7f006e958640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f006015e970 con 0x7f00601588d0 2026-03-10T13:36:09.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.229+0000 7f006d956640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f00600a45d0 0x7f00600b0d00 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:41444/0 (socket says 192.168.123.105:41444) 2026-03-10T13:36:09.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.229+0000 7f006d956640 1 -- 192.168.123.105:0/3701468169 learned_addr learned my addr 192.168.123.105:0/3701468169 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:36:09.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.230+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2386806327 0 0) 0x7f006015c570 con 0x7f00600a45d0 2026-03-10T13:36:09.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.230+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0044003620 con 0x7f00600a45d0 2026-03-10T13:36:09.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.230+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2731538485 0 0) 0x7f006015d770 con 0x7f00600b4c60 2026-03-10T13:36:09.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.231+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f006015c570 con 0x7f00600b4c60 2026-03-10T13:36:09.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.231+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 239987966 0 0) 0x7f0044003620 con 0x7f00600a45d0 2026-03-10T13:36:09.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.231+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f006015d770 con 0x7f00600a45d0 2026-03-10T13:36:09.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.231+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 
0 0) 0x7f0064003060 con 0x7f00600a45d0 2026-03-10T13:36:09.232 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.231+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3057731953 0 0) 0x7f006015d770 con 0x7f00600a45d0 2026-03-10T13:36:09.232 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.231+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 >> v1:192.168.123.105:6790/0 conn(0x7f00601588d0 legacy=0x7f006015acc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:09.232 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.231+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 >> v1:192.168.123.109:6789/0 conn(0x7f00600b4c60 legacy=0x7f00600b1410 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:09.232 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.231+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f006015fb70 con 0x7f00600a45d0 2026-03-10T13:36:09.232 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.231+0000 7f006e958640 1 -- 192.168.123.105:0/3701468169 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f006015c7a0 con 0x7f00600a45d0 2026-03-10T13:36:09.232 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.232+0000 7f006e958640 1 -- 192.168.123.105:0/3701468169 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f006015cd50 con 0x7f00600a45d0 2026-03-10T13:36:09.234 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.232+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f00640045c0 con 0x7f00600a45d0 2026-03-10T13:36:09.234 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.233+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f0064004e20 con 0x7f00600a45d0 2026-03-10T13:36:09.234 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.233+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2642844410 0 0) 0x7f00640050a0 con 0x7f00600a45d0 2026-03-10T13:36:09.234 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.233+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (unknown 528678538 0 0) 0x7f006404e3d0 con 0x7f00600a45d0 2026-03-10T13:36:09.237 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.234+0000 7f006e958640 1 -- 192.168.123.105:0/3701468169 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f00600a5af0 con 0x7f00600a45d0 2026-03-10T13:36:09.238 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.237+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f0064018ab0 con 0x7f00600a45d0 2026-03-10T13:36:09.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.339+0000 7f006e958640 1 -- 192.168.123.105:0/3701468169 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "config generate-minimal-conf"} v 0) -- 0x7f00600b7990 
con 0x7f00600a45d0 2026-03-10T13:36:09.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.339+0000 7f005affd640 1 -- 192.168.123.105:0/3701468169 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v9) ==== 76+0+199 (unknown 2189217731 0 609017685) 0x7f00640147a0 con 0x7f00600a45d0 2026-03-10T13:36:09.340 INFO:teuthology.orchestra.run.vm05.stdout:# minimal ceph.conf for e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:36:09.340 INFO:teuthology.orchestra.run.vm05.stdout:[global] 2026-03-10T13:36:09.340 INFO:teuthology.orchestra.run.vm05.stdout: fsid = e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:36:09.340 INFO:teuthology.orchestra.run.vm05.stdout: mon_host = 192.168.123.105:6789/0 192.168.123.109:6789/0 v1:192.168.123.105:6790/0 2026-03-10T13:36:09.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.342+0000 7f0058ff9640 1 -- 192.168.123.105:0/3701468169 >> v1:192.168.123.105:6800/3845654103 conn(0x7f004403e870 legacy=0x7f0044040d30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:09.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.342+0000 7f0058ff9640 1 -- 192.168.123.105:0/3701468169 >> v1:192.168.123.105:6789/0 conn(0x7f00600a45d0 legacy=0x7f00600b0d00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:09.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.343+0000 7f0058ff9640 1 -- 192.168.123.105:0/3701468169 shutdown_connections 2026-03-10T13:36:09.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.343+0000 7f0058ff9640 1 -- 192.168.123.105:0/3701468169 >> 192.168.123.105:0/3701468169 conn(0x7f006001a120 msgr2=0x7f00600ab200 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:36:09.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.343+0000 7f0058ff9640 1 -- 192.168.123.105:0/3701468169 shutdown_connections 2026-03-10T13:36:09.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:09.343+0000 7f0058ff9640 1 -- 192.168.123.105:0/3701468169 wait complete. 2026-03-10T13:36:09.512 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 
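The step above wraps "ceph config generate-minimal-conf" in a cephadm shell, so the command runs inside the cluster's container image with the host's config and admin keyring mounted. A minimal sketch of the same invocation, reusing the image, fsid and key paths from this run (the output redirect is an illustrative addition, not part of the captured command):

    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 \
        -- ceph config generate-minimal-conf > minimal-ceph.conf    # capture the generated minimal conf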
2026-03-10T13:36:09.512 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T13:36:09.513 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T13:36:09.565 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T13:36:09.565 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:36:09.636 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T13:36:09.636 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T13:36:09.659 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T13:36:09.659 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:36:09.723 INFO:tasks.cephadm:Adding mgr.y on vm05 2026-03-10T13:36:09.723 INFO:tasks.cephadm:Adding mgr.x on vm09 2026-03-10T13:36:09.723 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch apply mgr '2;vm05=y;vm09=x' 2026-03-10T13:36:09.803 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: Updating vm05:/etc/ceph/ceph.conf 2026-03-10T13:36:09.803 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: Updating vm09:/etc/ceph/ceph.conf 2026-03-10T13:36:09.803 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:36:09.803 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: Updating vm05:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.conf 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.109:0/837882518' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: Updating vm09:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.conf 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: Reconfiguring mon.a (unknown last config time)... 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: Reconfiguring daemon mon.a on vm05 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3701468169' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: Reconfiguring mon.c (monmap changed)... 
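The "sudo dd of=/etc/ceph/ceph.conf" and "sudo dd of=/etc/ceph/ceph.client.admin.keyring" commands above are how the harness distributes the final config and keyring: the file contents are written to the remote command's stdin and dd acts only as a privileged file writer. A minimal sketch of the same pattern run by hand, assuming the files exist locally and the hosts are reachable over SSH:

    cat minimal-ceph.conf | ssh vm05 'sudo dd of=/etc/ceph/ceph.conf'                        # write config as root
    cat admin.keyring    | ssh vm05 'sudo dd of=/etc/ceph/ceph.client.admin.keyring'         # write keyring as root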
2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: Reconfiguring daemon mon.c on vm05 2026-03-10T13:36:09.804 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:09.930 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:36:10.065 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.064+0000 7fe2426c5640 1 -- 192.168.123.109:0/2539626633 >> v1:192.168.123.105:6789/0 conn(0x7fe23c100390 legacy=0x7fe23c100790 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:10.065 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.065+0000 7fe2426c5640 1 -- 192.168.123.109:0/2539626633 shutdown_connections 2026-03-10T13:36:10.065 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.065+0000 7fe2426c5640 1 -- 192.168.123.109:0/2539626633 >> 192.168.123.109:0/2539626633 conn(0x7fe23c0fbb60 msgr2=0x7fe23c0fdf80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:36:10.065 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.065+0000 7fe2426c5640 1 -- 192.168.123.109:0/2539626633 shutdown_connections 2026-03-10T13:36:10.065 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.065+0000 7fe2426c5640 1 -- 192.168.123.109:0/2539626633 wait complete. 
2026-03-10T13:36:10.065 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.066+0000 7fe2426c5640 1 Processor -- start 2026-03-10T13:36:10.065 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.066+0000 7fe2426c5640 1 -- start start 2026-03-10T13:36:10.066 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.066+0000 7fe2426c5640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe23c1b3670 con 0x7fe23c110310 2026-03-10T13:36:10.066 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.066+0000 7fe2426c5640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe23c1b4850 con 0x7fe23c1157f0 2026-03-10T13:36:10.066 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.066+0000 7fe2426c5640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe23c1b5a30 con 0x7fe23c100390 2026-03-10T13:36:10.066 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.066+0000 7fe241ec4640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7fe23c1157f0 0x7fe23c1b1f50 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.109:44326/0 (socket says 192.168.123.109:44326) 2026-03-10T13:36:10.066 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.066+0000 7fe241ec4640 1 -- 192.168.123.109:0/1609368208 learned_addr learned my addr 192.168.123.109:0/1609368208 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-10T13:36:10.066 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.067+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4028003422 0 0) 0x7fe23c1b4850 con 0x7fe23c1157f0 2026-03-10T13:36:10.066 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.067+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fe218003620 con 0x7fe23c1157f0 2026-03-10T13:36:10.066 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.067+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2943532291 0 0) 0x7fe23c1b3670 con 0x7fe23c110310 2026-03-10T13:36:10.067 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.067+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fe23c1b4850 con 0x7fe23c110310 2026-03-10T13:36:10.067 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.067+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2942052812 0 0) 0x7fe218003620 con 0x7fe23c1157f0 2026-03-10T13:36:10.067 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.067+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fe23c1b3670 con 0x7fe23c1157f0 2026-03-10T13:36:10.067 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.067+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fe238003400 con 0x7fe23c1157f0 2026-03-10T13:36:10.067 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.067+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 <== mon.0 
v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3189051342 0 0) 0x7fe23c1b4850 con 0x7fe23c110310 2026-03-10T13:36:10.067 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.067+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fe218003620 con 0x7fe23c110310 2026-03-10T13:36:10.067 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.067+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fe2300030a0 con 0x7fe23c110310 2026-03-10T13:36:10.067 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.067+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1492421931 0 0) 0x7fe23c1b3670 con 0x7fe23c1157f0 2026-03-10T13:36:10.067 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.067+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 >> v1:192.168.123.105:6790/0 conn(0x7fe23c100390 legacy=0x7fe23c10fc00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:10.067 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.067+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 >> v1:192.168.123.105:6789/0 conn(0x7fe23c110310 legacy=0x7fe23c1140d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:10.067 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.067+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe23c1b6c10 con 0x7fe23c1157f0 2026-03-10T13:36:10.068 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.067+0000 7fe2426c5640 1 -- 192.168.123.109:0/1609368208 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fe23c1b4a20 con 0x7fe23c1157f0 2026-03-10T13:36:10.068 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.068+0000 7fe2426c5640 1 -- 192.168.123.109:0/1609368208 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fe23c1b5050 con 0x7fe23c1157f0 2026-03-10T13:36:10.068 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.068+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fe2380051f0 con 0x7fe23c1157f0 2026-03-10T13:36:10.068 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.068+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fe2380060b0 con 0x7fe23c1157f0 2026-03-10T13:36:10.069 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.069+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2642844410 0 0) 0x7fe2380126e0 con 0x7fe23c1157f0 2026-03-10T13:36:10.069 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.069+0000 7fe2426c5640 1 -- 192.168.123.109:0/1609368208 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe23c1064d0 con 0x7fe23c1157f0 2026-03-10T13:36:10.071 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.070+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (unknown 528678538 0 0) 
0x7fe238014870 con 0x7fe23c1157f0 2026-03-10T13:36:10.077 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.078+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fe2380056b0 con 0x7fe23c1157f0 2026-03-10T13:36:10.176 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.176+0000 7fe2426c5640 1 -- 192.168.123.109:0/1609368208 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm05=y;vm09=x", "target": ["mon-mgr", ""]}) -- 0x7fe23c1036b0 con 0x7fe21803ebd0 2026-03-10T13:36:10.183 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.182+0000 7fe22a7fc640 1 -- 192.168.123.109:0/1609368208 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (unknown 0 0 325935098) 0x7fe23c1036b0 con 0x7fe21803ebd0 2026-03-10T13:36:10.186 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mgr update... 2026-03-10T13:36:10.186 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.186+0000 7fe2426c5640 1 -- 192.168.123.109:0/1609368208 >> v1:192.168.123.105:6800/3845654103 conn(0x7fe21803ebd0 legacy=0x7fe218041090 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:10.186 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.186+0000 7fe2426c5640 1 -- 192.168.123.109:0/1609368208 >> v1:192.168.123.109:6789/0 conn(0x7fe23c1157f0 legacy=0x7fe23c1b1f50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:10.187 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.187+0000 7fe2426c5640 1 -- 192.168.123.109:0/1609368208 shutdown_connections 2026-03-10T13:36:10.187 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.187+0000 7fe2426c5640 1 -- 192.168.123.109:0/1609368208 >> 192.168.123.109:0/1609368208 conn(0x7fe23c0fbb60 msgr2=0x7fe23c0fdf50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:36:10.187 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.187+0000 7fe2426c5640 1 -- 192.168.123.109:0/1609368208 shutdown_connections 2026-03-10T13:36:10.187 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:10.187+0000 7fe2426c5640 1 -- 192.168.123.109:0/1609368208 wait complete. 2026-03-10T13:36:10.338 DEBUG:teuthology.orchestra.run.vm09:mgr.x> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mgr.x.service 2026-03-10T13:36:10.339 INFO:tasks.cephadm:Deploying OSDs... 2026-03-10T13:36:10.339 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T13:36:10.340 DEBUG:teuthology.orchestra.run.vm05:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T13:36:10.357 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T13:36:10.358 DEBUG:teuthology.orchestra.run.vm05:> ls /dev/[sv]d? 
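The placement string '2;vm05=y;vm09=x' passed to "ceph orch apply mgr" requests two manager daemons and pins them by name, mgr.y on vm05 and mgr.x on vm09, which matches the "Adding mgr.y on vm05" / "Adding mgr.x on vm09" lines earlier; the reply "Scheduled mgr update..." confirms the spec was accepted. A sketch of the same call outside the test harness, assuming it is issued from a working cephadm shell on an admin host:

    ceph orch apply mgr '2;vm05=y;vm09=x'    # count;host=daemon-name pairs
    ceph orch ls mgr                         # check the scheduled mgr service and placement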
2026-03-10T13:36:10.421 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vda 2026-03-10T13:36:10.421 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdb 2026-03-10T13:36:10.421 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdc 2026-03-10T13:36:10.421 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdd 2026-03-10T13:36:10.421 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vde 2026-03-10T13:36:10.421 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T13:36:10.421 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T13:36:10.421 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdb 2026-03-10T13:36:10.477 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdb 2026-03-10T13:36:10.477 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T13:36:10.477 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10 2026-03-10T13:36:10.477 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T13:36:10.477 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T13:36:10.477 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 13:35:50.562345157 +0000 2026-03-10T13:36:10.477 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 13:33:36.899514590 +0000 2026-03-10T13:36:10.477 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 13:33:36.899514590 +0000 2026-03-10T13:36:10.477 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-10 13:31:05.332000000 +0000 2026-03-10T13:36:10.477 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T13:36:10.541 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-10T13:36:10.541 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-10T13:36:10.541 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000191548 s, 2.7 MB/s 2026-03-10T13:36:10.543 DEBUG:teuthology.orchestra.run.vm05:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T13:36:10.601 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdc 2026-03-10T13:36:10.660 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdc 2026-03-10T13:36:10.660 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T13:36:10.660 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20 2026-03-10T13:36:10.660 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T13:36:10.660 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T13:36:10.660 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 13:35:50.617345260 +0000 2026-03-10T13:36:10.660 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 13:33:36.909515121 +0000 2026-03-10T13:36:10.660 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 13:33:36.909515121 +0000 2026-03-10T13:36:10.660 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-10 13:31:05.334000000 +0000 2026-03-10T13:36:10.660 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T13:36:10.721 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: Deploying daemon mon.c on vm05 2026-03-10T13:36:10.721 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:36:10.721 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:10.721 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: mon.a calling monitor election 2026-03-10T13:36:10.721 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.109:0/1049391118' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: mon.b calling monitor election 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: monmap epoch 2 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: fsid e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: last_changed 2026-03-10T13:35:58.298421+0000 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: created 2026-03-10T13:35:21.154333+0000 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: min_mon_release 19 (squid) 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: election_strategy: 1 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: 0: v1:192.168.123.105:6789/0 mon.a 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: 1: v1:192.168.123.109:6789/0 mon.b 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: fsmap 
2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: mgrmap e13: y(active, since 16s) 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: overall HEALTH_OK 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: mon.a calling monitor election 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: mon.b calling monitor election 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:10.722 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: monmap epoch 3 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 
vm05 ceph-mon[58955]: fsid e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: last_changed 2026-03-10T13:36:03.434471+0000 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: created 2026-03-10T13:35:21.154333+0000 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: min_mon_release 19 (squid) 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: election_strategy: 1 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: 0: v1:192.168.123.105:6789/0 mon.a 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: 1: v1:192.168.123.109:6789/0 mon.b 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: 2: v1:192.168.123.105:6790/0 mon.c 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: fsmap 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: mgrmap e13: y(active, since 21s) 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: overall HEALTH_OK 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: Updating vm05:/etc/ceph/ceph.conf 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: Updating vm09:/etc/ceph/ceph.conf 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: Updating vm05:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.conf 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.109:0/837882518' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: Updating vm09:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.conf 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: Reconfiguring mon.a (unknown last config time)... 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: Reconfiguring daemon mon.a on vm05 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3701468169' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: Reconfiguring mon.c (monmap changed)... 
2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: Reconfiguring daemon mon.c on vm05 2026-03-10T13:36:10.723 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:10.723 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-10T13:36:10.724 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-10T13:36:10.724 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000124633 s, 4.1 MB/s 2026-03-10T13:36:10.724 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T13:36:10.779 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdd 2026-03-10T13:36:10.836 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdd 2026-03-10T13:36:10.836 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T13:36:10.836 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-10T13:36:10.836 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T13:36:10.836 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T13:36:10.836 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 13:35:50.645345312 +0000 2026-03-10T13:36:10.836 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 13:33:36.900514643 +0000 2026-03-10T13:36:10.836 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 13:33:36.900514643 +0000 2026-03-10T13:36:10.836 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-10 13:31:05.336000000 +0000 2026-03-10T13:36:10.836 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T13:36:10.899 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-10T13:36:10.899 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-10T13:36:10.899 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000166601 s, 3.1 MB/s 2026-03-10T13:36:10.900 DEBUG:teuthology.orchestra.run.vm05:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T13:36:10.956 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vde 2026-03-10T13:36:11.015 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vde 2026-03-10T13:36:11.015 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T13:36:11.015 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-10T13:36:11.015 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T13:36:11.015 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T13:36:11.015 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 13:35:50.676345370 +0000 2026-03-10T13:36:11.015 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 13:33:36.896514431 +0000 2026-03-10T13:36:11.015 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 13:33:36.896514431 +0000 2026-03-10T13:36:11.015 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-10 13:31:05.404000000 +0000 2026-03-10T13:36:11.015 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T13:36:11.083 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-10T13:36:11.083 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-10T13:36:11.083 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.00017025 s, 3.0 MB/s 2026-03-10T13:36:11.084 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T13:36:11.142 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T13:36:11.142 DEBUG:teuthology.orchestra.run.vm09:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T13:36:11.159 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T13:36:11.159 DEBUG:teuthology.orchestra.run.vm09:> ls /dev/[sv]d? 
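Before deploying OSDs, the harness probes the candidate disks on vm05: the initial "dd if=/scratch_devs" read fails (remote process result 1), so it falls back to listing /dev/[sv]d?, drops the root device /dev/vda, and then for each remaining device runs stat, reads one 512-byte block with dd, and confirms the device is not mounted; the same probe repeats on vm09 in the lines that follow. A minimal sketch of that per-device check, assuming it is run as root on the target host:

    for dev in /dev/vd{b,c,d,e}; do
        stat "$dev"                                     # device node exists and is a block special file
        dd if="$dev" of=/dev/null count=1               # device is readable
        ! mount | grep -v devtmpfs | grep -q "$dev"     # device is not mounted anywhere
    done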
2026-03-10T13:36:11.220 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:11 vm09 podman[54614]: 2026-03-10 13:36:11.219403876 +0000 UTC m=+0.023298174 container create 15c4a5b90f703dc23149560a5c0b0654a9bed8a2912f7db9288e1266f1d844be (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0)
2026-03-10T13:36:11.223 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vda
2026-03-10T13:36:11.223 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdb
2026-03-10T13:36:11.223 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdc
2026-03-10T13:36:11.223 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdd
2026-03-10T13:36:11.223 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vde
2026-03-10T13:36:11.223 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T13:36:11.223 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T13:36:11.223 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdb
2026-03-10T13:36:11.288 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdb
2026-03-10T13:36:11.288 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T13:36:11.288 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10
2026-03-10T13:36:11.288 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T13:36:11.288 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T13:36:11.288 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 13:35:55.696835055 +0000
2026-03-10T13:36:11.288 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 13:33:33.469293388 +0000
2026-03-10T13:36:11.288 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 13:33:33.469293388 +0000
2026-03-10T13:36:11.288 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 13:30:34.307000000 +0000
2026-03-10T13:36:11.289 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T13:36:11.363 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T13:36:11.363 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T13:36:11.363 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000682666 s, 750 kB/s
2026-03-10T13:36:11.365 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T13:36:11.511 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdc
2026-03-10T13:36:11.534 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdc
2026-03-10T13:36:11.534 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T13:36:11.534 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20
2026-03-10T13:36:11.534 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T13:36:11.534 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T13:36:11.534 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 13:35:55.732834974 +0000
2026-03-10T13:36:11.534 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 13:33:33.468293386 +0000
2026-03-10T13:36:11.534 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 13:33:33.468293386 +0000
2026-03-10T13:36:11.534 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 13:30:34.311000000 +0000
2026-03-10T13:36:11.535 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T13:36:11.652 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:11 vm09 podman[54614]: 2026-03-10 13:36:11.266477531 +0000 UTC m=+0.070371829 container init 15c4a5b90f703dc23149560a5c0b0654a9bed8a2912f7db9288e1266f1d844be (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
2026-03-10T13:36:11.652 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:11 vm09 podman[54614]: 2026-03-10 13:36:11.272018936 +0000 UTC m=+0.075913234 container start 15c4a5b90f703dc23149560a5c0b0654a9bed8a2912f7db9288e1266f1d844be (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-10T13:36:11.652 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:11 vm09 bash[54614]: 15c4a5b90f703dc23149560a5c0b0654a9bed8a2912f7db9288e1266f1d844be
2026-03-10T13:36:11.652 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:11 vm09 podman[54614]: 2026-03-10 13:36:11.208910228 +0000 UTC m=+0.012804537 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T13:36:11.652 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:11 vm09 systemd[1]: Started Ceph mgr.x for e063dc72-1c85-11f1-a098-09993c5c5b66.
2026-03-10T13:36:11.652 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:11 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:11.384+0000 7f14010e8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T13:36:11.652 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:11 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:11.435+0000 7f14010e8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T13:36:11.661 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T13:36:11.661 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T13:36:11.661 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.00649745 s, 78.8 kB/s
2026-03-10T13:36:11.663 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T13:36:11.701 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdd
2026-03-10T13:36:11.803 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdd
2026-03-10T13:36:11.803 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T13:36:11.803 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-10T13:36:11.803 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T13:36:11.803 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T13:36:11.803 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 13:35:55.761834909 +0000
2026-03-10T13:36:11.803 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 13:33:33.505293448 +0000
2026-03-10T13:36:11.803 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 13:33:33.505293448 +0000
2026-03-10T13:36:11.803 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 13:30:34.318000000 +0000
2026-03-10T13:36:11.803 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T13:36:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: mon.c calling monitor election
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: mon.c calling monitor election
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: mon.a calling monitor election
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: mon.b calling monitor election
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2)
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: monmap epoch 3
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: fsid e063dc72-1c85-11f1-a098-09993c5c5b66
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: last_changed 2026-03-10T13:36:03.434471+0000
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: created 2026-03-10T13:35:21.154333+0000
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: min_mon_release 19 (squid)
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: election_strategy: 1
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: 0: v1:192.168.123.105:6789/0 mon.a
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: 1: v1:192.168.123.109:6789/0 mon.b
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: 2: v1:192.168.123.105:6790/0 mon.c
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: fsmap
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: osdmap e4: 0 total, 0 up, 0 in
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: mgrmap e13: y(active, since 23s)
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: overall HEALTH_OK
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: mon.c calling monitor election
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: mon.c calling monitor election
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: mon.a calling monitor election
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: mon.b calling monitor election
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2)
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: monmap epoch 3
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: fsid e063dc72-1c85-11f1-a098-09993c5c5b66
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: last_changed 2026-03-10T13:36:03.434471+0000
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: created 2026-03-10T13:35:21.154333+0000
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: min_mon_release 19 (squid)
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: election_strategy: 1
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: 0: v1:192.168.123.105:6789/0 mon.a
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: 1: v1:192.168.123.109:6789/0 mon.b
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: 2: v1:192.168.123.105:6790/0 mon.c
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: fsmap
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: osdmap e4: 0 total, 0 up, 0 in
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: mgrmap e13: y(active, since 23s)
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: overall HEALTH_OK
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:11.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:11.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:11.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:11 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:36:11.848 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T13:36:11.848 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T13:36:11.848 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000288259 s, 1.8 MB/s
2026-03-10T13:36:11.849 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T13:36:11.878 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vde
2026-03-10T13:36:11.915 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:11 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:11.911+0000 7f14010e8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T13:36:11.935 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vde
2026-03-10T13:36:11.935 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T13:36:11.935 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-10T13:36:11.935 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T13:36:11.935 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T13:36:11.936 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 13:35:55.785834855 +0000
2026-03-10T13:36:11.936 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 13:33:33.507293452 +0000
2026-03-10T13:36:11.936 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 13:33:33.507293452 +0000
2026-03-10T13:36:11.936 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 13:30:34.348000000 +0000
2026-03-10T13:36:11.936 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T13:36:12.009 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T13:36:12.009 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T13:36:12.009 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000213278 s, 2.4 MB/s
2026-03-10T13:36:12.010 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T13:36:12.074 INFO:tasks.cephadm:Deploying osd.0 on vm05 with /dev/vde...
2026-03-10T13:36:12.074 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- lvm zap /dev/vde
2026-03-10T13:36:12.531 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:12 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:12.265+0000 7f14010e8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T13:36:12.531 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:12 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T13:36:12.531 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:12 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
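The osd.0 deployment that starts at 13:36:12.074 is a two-step cephadm sequence: zap the device with ceph-volume, then hand it to the orchestrator. The same pair of commands, sketched with shell variables standing in for this run's image and fsid:

    IMAGE=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
    FSID=e063dc72-1c85-11f1-a098-09993c5c5b66
    # Wipe any LVM/partition state left on the device.
    sudo cephadm --image "$IMAGE" ceph-volume -c /etc/ceph/ceph.conf \
        -k /etc/ceph/ceph.client.admin.keyring --fsid "$FSID" -- lvm zap /dev/vde
    # Ask the orchestrator to create an OSD on the zapped device.
    sudo cephadm --image "$IMAGE" shell -c /etc/ceph/ceph.conf \
        -k /etc/ceph/ceph.client.admin.keyring --fsid "$FSID" -- \
        ceph orch daemon add osd vm05:/dev/vde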
2026-03-10T13:36:12.531 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:12 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: from numpy import show_config as show_numpy_config
2026-03-10T13:36:12.532 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:12 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:12.350+0000 7f14010e8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T13:36:12.532 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:12 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:12.385+0000 7f14010e8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T13:36:12.532 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:12 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:12.464+0000 7f14010e8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T13:36:12.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[51512]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:12.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:36:12.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:12.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:12.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:12.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:36:12.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:12.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T13:36:12.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T13:36:12.650 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:12.651 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[58955]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:12.651 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:36:12.651 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:12.651 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:12.651 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:12.651 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:36:12.651 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:12.651 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T13:36:12.651 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T13:36:12.651 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:12 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:12.651 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:36:12 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:36:12.432+0000 7ff7d12b6640 -1 mgr.server handle_report got status from non-daemon mon.c
2026-03-10T13:36:12.666 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config
2026-03-10T13:36:12.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:12 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:13.287 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:13 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:13.008+0000 7f14010e8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T13:36:13.287 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:13 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:13.125+0000 7f14010e8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T13:36:13.287 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:13 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:13.166+0000 7f14010e8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T13:36:13.287 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:13 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:13.202+0000 7f14010e8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T13:36:13.287 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:13 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:13.248+0000 7f14010e8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T13:36:13.287 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:13 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:13.287+0000 7f14010e8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T13:36:13.553 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:13 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:13.458+0000 7f14010e8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T13:36:13.554 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:13 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:13.508+0000 7f14010e8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T13:36:13.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[51512]: Reconfiguring mgr.y (unknown last config time)...
2026-03-10T13:36:13.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[51512]: Reconfiguring daemon mgr.y on vm05
2026-03-10T13:36:13.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[51512]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[58955]: Reconfiguring mgr.y (unknown last config time)...
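The recurring "Module <name> has missing NOTIFY_TYPES member" lines are emitted once per Python module while mgr.x loads its plugins; the daemon keeps starting regardless. One way to collect them for triage from the vm09 journal, a sketch assuming the ceph-$FSID@mgr.x.service unit naming seen elsewhere in this log:

    FSID=e063dc72-1c85-11f1-a098-09993c5c5b66
    # List which mgr modules were flagged during startup.
    sudo journalctl -u "ceph-${FSID}@mgr.x.service" | grep 'missing NOTIFY_TYPES'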
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[58955]: Reconfiguring daemon mgr.y on vm05
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[58955]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:13 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:13.923 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:13 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:13.733+0000 7f14010e8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T13:36:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:13 vm09 ceph-mon[53367]: Reconfiguring mgr.y (unknown last config time)...
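At this point the journal shows a settled cluster: mon.a leads a three-monitor quorum (a,b,c) and mgr.y is the active manager. The same state can be confirmed interactively with standard commands, a sketch to be run from either node:

    FSID=e063dc72-1c85-11f1-a098-09993c5c5b66
    # Monitor quorum membership and the current leader.
    sudo cephadm shell --fsid "$FSID" -- ceph quorum_status --format json-pretty
    # Active/standby manager daemons.
    sudo cephadm shell --fsid "$FSID" -- ceph mgr stat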
2026-03-10T13:36:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:13 vm09 ceph-mon[53367]: Reconfiguring daemon mgr.y on vm05
2026-03-10T13:36:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:13 vm09 ceph-mon[53367]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:13 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:13 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:13 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:36:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:13 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:13 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:36:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:13 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:13 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:14.031 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:36:14.053 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch daemon add osd vm05:/dev/vde
2026-03-10T13:36:14.224 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config
2026-03-10T13:36:14.284 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:14 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:14.007+0000 7f14010e8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T13:36:14.284 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:14 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:14.046+0000 7f14010e8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T13:36:14.284 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:14 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:14.087+0000 7f14010e8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T13:36:14.284 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:14 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:14.165+0000 7f14010e8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T13:36:14.284 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:14 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:14.202+0000 7f14010e8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T13:36:14.374 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.373+0000 7f80dd28c640 1 -- 192.168.123.105:0/4220532616 >> v1:192.168.123.105:6789/0 conn(0x7f80d810c8e0 legacy=0x7f80d810eda0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:36:14.374 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.374+0000 7f80dd28c640 1 -- 192.168.123.105:0/4220532616 shutdown_connections
2026-03-10T13:36:14.374 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.374+0000 7f80dd28c640 1 -- 192.168.123.105:0/4220532616 >> 192.168.123.105:0/4220532616 conn(0x7f80d80fff30 msgr2=0x7f80d8102370 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:36:14.374 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.374+0000 7f80dd28c640 1 -- 192.168.123.105:0/4220532616 shutdown_connections
2026-03-10T13:36:14.375 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.374+0000 7f80dd28c640 1 -- 192.168.123.105:0/4220532616 wait complete.
2026-03-10T13:36:14.375 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.374+0000 7f80dd28c640 1 Processor -- start
2026-03-10T13:36:14.375 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.375+0000 7f80dd28c640 1 -- start start
2026-03-10T13:36:14.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.375+0000 7f80dd28c640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f80d819c5c0 con 0x7f80d810c8e0
2026-03-10T13:36:14.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.375+0000 7f80dd28c640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f80d81a7d80 con 0x7f80d81047a0
2026-03-10T13:36:14.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.375+0000 7f80dd28c640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f80d81a8f60 con 0x7f80d8108bd0
2026-03-10T13:36:14.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.375+0000 7f80dca8b640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f80d810c8e0 0x7f80d81a5650 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:41462/0 (socket says 192.168.123.105:41462)
2026-03-10T13:36:14.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.375+0000 7f80dca8b640 1 -- 192.168.123.105:0/3464830830 learned_addr learned my addr 192.168.123.105:0/3464830830 (peer_addr_for_me v1:192.168.123.105:0/0)
2026-03-10T13:36:14.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.375+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2898796946 0 0) 0x7f80d819c5c0 con 0x7f80d810c8e0
2026-03-10T13:36:14.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.375+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f80ac003620 con 0x7f80d810c8e0
2026-03-10T13:36:14.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.375+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1644354550 0 0) 0x7f80d81a8f60 con 0x7f80d8108bd0
2026-03-10T13:36:14.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.376+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f80d819c5c0 con 0x7f80d8108bd0
2026-03-10T13:36:14.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.376+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3482667992 0 0) 0x7f80ac003620 con 0x7f80d810c8e0
2026-03-10T13:36:14.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.376+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f80d81a8f60 con 0x7f80d810c8e0
2026-03-10T13:36:14.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.376+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1771020596 0 0) 0x7f80d819c5c0 con 0x7f80d8108bd0
2026-03-10T13:36:14.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.376+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f80ac003620 con 0x7f80d8108bd0
2026-03-10T13:36:14.376 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.376+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f80cc003160 con 0x7f80d810c8e0
2026-03-10T13:36:14.377 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.376+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f80c4002ca0 con 0x7f80d8108bd0
2026-03-10T13:36:14.377 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.376+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1463728320 0 0) 0x7f80d81a8f60 con 0x7f80d810c8e0
2026-03-10T13:36:14.377 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.376+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 >> v1:192.168.123.105:6790/0 conn(0x7f80d8108bd0 legacy=0x7f80d81a1f20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:36:14.377 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.376+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 >> v1:192.168.123.109:6789/0 conn(0x7f80d81047a0 legacy=0x7f80d819ba40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:36:14.377 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.377+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f80d81aa140 con 0x7f80d810c8e0
2026-03-10T13:36:14.377 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.377+0000 7f80dd28c640 1 -- 192.168.123.105:0/3464830830 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f80d81a9190 con 0x7f80d810c8e0
2026-03-10T13:36:14.377 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.377+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f80cc003e10 con 0x7f80d810c8e0
2026-03-10T13:36:14.379 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.378+0000 7f80dd28c640 1 -- 192.168.123.105:0/3464830830 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f80d81a9770 con 0x7f80d810c8e0
2026-03-10T13:36:14.379 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.378+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f80cc005300 con 0x7f80d810c8e0
2026-03-10T13:36:14.379 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.378+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 13) ==== 50271+0+0 (unknown 2642844410 0 0) 0x7f80cc005560 con 0x7f80d810c8e0
2026-03-10T13:36:14.379 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.378+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (unknown 528678538 0 0) 0x7f80cc003810 con 0x7f80d810c8e0
2026-03-10T13:36:14.382 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.378+0000 7f80dd28c640 1 -- 192.168.123.105:0/3464830830 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f80d81107d0 con 0x7f80d810c8e0
2026-03-10T13:36:14.382 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.382+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f80cc018f10 con 0x7f80d810c8e0
2026-03-10T13:36:14.480 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:14.480+0000 7f80dd28c640 1 -- 192.168.123.105:0/3464830830 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}) -- 0x7f80d81a9df0 con 0x7f80ac03eb40
2026-03-10T13:36:14.542 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:14 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:14.284+0000 7f14010e8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T13:36:14.542 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:14 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:14.401+0000 7f14010e8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T13:36:14.561 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:14 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T13:36:14.561 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:14 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T13:36:14.561 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:14 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T13:36:14.561 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:14 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:14.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:14 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T13:36:14.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:14 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:14 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T13:36:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:14 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T13:36:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:14 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:14.923 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:14 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:14.542+0000 7f14010e8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T13:36:14.923 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:36:14 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:36:14.580+0000 7f14010e8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T13:36:15.576 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:15.575+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mgrmap(e 14) ==== 99944+0+0 (unknown 2251000709 0 0) 0x7f80cc018130 con 0x7f80d810c8e0
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[51512]: from='client.14214 v1:192.168.123.105:0/3464830830' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[51512]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[51512]: Standby manager daemon x started
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[51512]: from='mgr.? v1:192.168.123.109:0/4221436370' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[51512]: from='mgr.? v1:192.168.123.109:0/4221436370' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[51512]: from='mgr.? v1:192.168.123.109:0/4221436370' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[51512]: from='mgr.? v1:192.168.123.109:0/4221436370' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/435409724' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "27418eee-abb2-4d75-aadf-ed68d081290c"}]: dispatch
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/435409724' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "27418eee-abb2-4d75-aadf-ed68d081290c"}]': finished
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[51512]: osdmap e5: 1 total, 0 up, 1 in
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[58955]: from='client.14214 v1:192.168.123.105:0/3464830830' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[58955]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[58955]: Standby manager daemon x started
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[58955]: from='mgr.? v1:192.168.123.109:0/4221436370' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[58955]: from='mgr.? v1:192.168.123.109:0/4221436370' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[58955]: from='mgr.? v1:192.168.123.109:0/4221436370' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[58955]: from='mgr.? v1:192.168.123.109:0/4221436370' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/435409724' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "27418eee-abb2-4d75-aadf-ed68d081290c"}]: dispatch
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/435409724' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "27418eee-abb2-4d75-aadf-ed68d081290c"}]': finished
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[58955]: osdmap e5: 1 total, 0 up, 1 in
2026-03-10T13:36:15.807 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:15 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:36:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:15 vm09 ceph-mon[53367]: from='client.14214 v1:192.168.123.105:0/3464830830' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:36:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:15 vm09 ceph-mon[53367]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:15 vm09 ceph-mon[53367]: Standby manager daemon x started
2026-03-10T13:36:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:15 vm09 ceph-mon[53367]: from='mgr.? v1:192.168.123.109:0/4221436370' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T13:36:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:15 vm09 ceph-mon[53367]: from='mgr.? v1:192.168.123.109:0/4221436370' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T13:36:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:15 vm09 ceph-mon[53367]: from='mgr.? v1:192.168.123.109:0/4221436370' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T13:36:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:15 vm09 ceph-mon[53367]: from='mgr.? v1:192.168.123.109:0/4221436370' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T13:36:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/435409724' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "27418eee-abb2-4d75-aadf-ed68d081290c"}]: dispatch
2026-03-10T13:36:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/435409724' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "27418eee-abb2-4d75-aadf-ed68d081290c"}]': finished
2026-03-10T13:36:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:15 vm09 ceph-mon[53367]: osdmap e5: 1 total, 0 up, 1 in
2026-03-10T13:36:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:15 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:36:16.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:16 vm09 ceph-mon[53367]: mgrmap e14: y(active, since 29s), standbys: x
2026-03-10T13:36:16.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:16 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T13:36:16.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2452380482' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T13:36:17.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:16 vm05 ceph-mon[51512]: mgrmap e14: y(active, since 29s), standbys: x
2026-03-10T13:36:17.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:16 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T13:36:17.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2452380482' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T13:36:17.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:16 vm05 ceph-mon[58955]: mgrmap e14: y(active, since 29s), standbys: x
2026-03-10T13:36:17.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:16 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T13:36:17.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2452380482' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T13:36:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:17 vm09 ceph-mon[53367]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:18.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:17 vm05 ceph-mon[51512]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:17 vm05 ceph-mon[58955]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:19.750 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:19 vm05 ceph-mon[51512]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:19.750 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:19 vm05 ceph-mon[58955]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:19 vm09 ceph-mon[53367]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:20.900 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:20 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T13:36:20.900 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:20 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:20.900 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:20 vm05 ceph-mon[51512]: Deploying daemon osd.0 on vm05
2026-03-10T13:36:20.900 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:20 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T13:36:20.900 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:20 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:20.900 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:20 vm05 ceph-mon[58955]: Deploying daemon osd.0 on vm05
2026-03-10T13:36:20.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:20 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T13:36:20.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:20 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:20.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:20 vm09 ceph-mon[53367]: Deploying daemon osd.0 on vm05
2026-03-10T13:36:21.748 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:21 vm05 ceph-mon[51512]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:21.749 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:21 vm05 ceph-mon[58955]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:21 vm09 ceph-mon[53367]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:22.858 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:22 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:36:22.858 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:22 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:22.858 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:22 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:22.858 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:22 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:36:22.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:22 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:22.859 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:22 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:22.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:22 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:36:22.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:22 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:22.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:22 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:23.254 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 0 on host 'vm05'
2026-03-10T13:36:23.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:23.252+0000 7f80d57fa640 1 -- 192.168.123.105:0/3464830830 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 4132516384) 0x7f80d81a9df0 con 0x7f80ac03eb40
2026-03-10T13:36:23.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:23.254+0000 7f80dd28c640 1 -- 192.168.123.105:0/3464830830 >> v1:192.168.123.105:6800/3845654103 conn(0x7f80ac03eb40 legacy=0x7f80ac041000 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:36:23.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:23.254+0000 7f80dd28c640 1 -- 192.168.123.105:0/3464830830 >> v1:192.168.123.105:6789/0 conn(0x7f80d810c8e0 legacy=0x7f80d81a5650 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:36:23.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:23.254+0000 7f80dd28c640 1 -- 192.168.123.105:0/3464830830 shutdown_connections
2026-03-10T13:36:23.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:23.254+0000 7f80dd28c640 1 -- 192.168.123.105:0/3464830830 >> 192.168.123.105:0/3464830830 conn(0x7f80d80fff30 msgr2=0x7f80d810b780 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:36:23.254 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:23.254+0000 7f80dd28c640 1 -- 192.168.123.105:0/3464830830 shutdown_connections
2026-03-10T13:36:23.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:23.254+0000 7f80dd28c640 1 -- 192.168.123.105:0/3464830830 wait complete.
2026-03-10T13:36:23.412 DEBUG:teuthology.orchestra.run.vm05:osd.0> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.0.service
2026-03-10T13:36:23.417 INFO:tasks.cephadm:Deploying osd.1 on vm05 with /dev/vdd...
2026-03-10T13:36:23.417 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- lvm zap /dev/vdd
2026-03-10T13:36:23.738 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config
2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[58955]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[51512]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:36:23.954
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:23.954 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:23 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:23 vm09 ceph-mon[53367]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:36:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:23 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:23 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:23 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:23 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:23 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:23 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:23 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:23 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:24.221 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 13:36:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-0[62706]: 2026-03-10T13:36:23.968+0000 7fea352c2740 -1 osd.0 0 log_to_monitors true 2026-03-10T13:36:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[51512]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T13:36:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[51512]: from='osd.0 v1:192.168.123.105:6801/3141950523' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T13:36:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 
13:36:24 vm05 ceph-mon[51512]: Detected new or changed devices on vm05 2026-03-10T13:36:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:36:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[58955]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T13:36:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[58955]: from='osd.0 v1:192.168.123.105:6801/3141950523' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T13:36:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[58955]: Detected new or changed devices on vm05 2026-03-10T13:36:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:36:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:24 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:25.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:24 vm09 ceph-mon[53367]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T13:36:25.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:24 vm09 ceph-mon[53367]: from='osd.0 v1:192.168.123.105:6801/3141950523' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 
2026-03-10T13:36:25.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:24 vm09 ceph-mon[53367]: Detected new or changed devices on vm05 2026-03-10T13:36:25.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:24 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:25.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:24 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:25.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:24 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:36:25.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:24 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:25.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:24 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:25.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:24 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:25.221 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:36:25.235 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch daemon add osd vm05:/dev/vdd 2026-03-10T13:36:25.387 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:36:25.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.514+0000 7f4df980c640 1 -- 192.168.123.105:0/1901136508 >> v1:192.168.123.105:6790/0 conn(0x7f4df4108dc0 legacy=0x7f4df410b210 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:25.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.514+0000 7f4df980c640 1 -- 192.168.123.105:0/1901136508 shutdown_connections 2026-03-10T13:36:25.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.514+0000 7f4df980c640 1 -- 192.168.123.105:0/1901136508 >> 192.168.123.105:0/1901136508 conn(0x7f4df4100120 msgr2=0x7f4df4102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:36:25.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.515+0000 7f4df980c640 1 -- 192.168.123.105:0/1901136508 shutdown_connections 2026-03-10T13:36:25.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.515+0000 7f4df980c640 1 -- 192.168.123.105:0/1901136508 wait complete. 
2026-03-10T13:36:25.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.515+0000 7f4df980c640 1 Processor -- start 2026-03-10T13:36:25.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.515+0000 7f4df980c640 1 -- start start 2026-03-10T13:36:25.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.515+0000 7f4df980c640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f4df419c670 con 0x7f4df4104990 2026-03-10T13:36:25.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.515+0000 7f4df980c640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f4df41a7e30 con 0x7f4df4108dc0 2026-03-10T13:36:25.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.515+0000 7f4df980c640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f4df41a9010 con 0x7f4df410cad0 2026-03-10T13:36:25.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.516+0000 7f4df37fe640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f4df410cad0 0x7f4df41a5700 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:58688/0 (socket says 192.168.123.105:58688) 2026-03-10T13:36:25.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.516+0000 7f4df37fe640 1 -- 192.168.123.105:0/804969454 learned_addr learned my addr 192.168.123.105:0/804969454 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:36:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.516+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1818787081 0 0) 0x7f4df41a9010 con 0x7f4df410cad0 2026-03-10T13:36:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.516+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4dc8003620 con 0x7f4df410cad0 2026-03-10T13:36:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.516+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1372340108 0 0) 0x7f4df41a7e30 con 0x7f4df4108dc0 2026-03-10T13:36:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.516+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4df41a9010 con 0x7f4df4108dc0 2026-03-10T13:36:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.516+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 703168610 0 0) 0x7f4dc8003620 con 0x7f4df410cad0 2026-03-10T13:36:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.516+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f4df41a7e30 con 0x7f4df410cad0 2026-03-10T13:36:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.516+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f4de4003520 con 0x7f4df410cad0 2026-03-10T13:36:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.516+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 <== mon.1 v1:192.168.123.109:6789/0 
2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4271773033 0 0) 0x7f4df41a9010 con 0x7f4df4108dc0 2026-03-10T13:36:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.516+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f4dc8003620 con 0x7f4df4108dc0 2026-03-10T13:36:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.516+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f4de8002b50 con 0x7f4df4108dc0 2026-03-10T13:36:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.516+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1292628045 0 0) 0x7f4df41a7e30 con 0x7f4df410cad0 2026-03-10T13:36:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.516+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 >> v1:192.168.123.109:6789/0 conn(0x7f4df4108dc0 legacy=0x7f4df41a1fd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.516+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 >> v1:192.168.123.105:6789/0 conn(0x7f4df4104990 legacy=0x7f4df419baf0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T13:36:25.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.517+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4df41aa1f0 con 0x7f4df410cad0 2026-03-10T13:36:25.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.517+0000 7f4df980c640 1 -- 192.168.123.105:0/804969454 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f4df41a8060 con 0x7f4df410cad0 2026-03-10T13:36:25.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.517+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f4de4003ee0 con 0x7f4df410cad0 2026-03-10T13:36:25.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.518+0000 7f4df980c640 1 -- 192.168.123.105:0/804969454 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f4df41a8640 con 0x7f4df410cad0 2026-03-10T13:36:25.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.519+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f4de40060f0 con 0x7f4df410cad0 2026-03-10T13:36:25.520 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.519+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 14) ==== 99944+0+0 (unknown 2251000709 0 0) 0x7f4de4006350 con 0x7f4df410cad0 2026-03-10T13:36:25.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.519+0000 7f4df980c640 1 -- 192.168.123.105:0/804969454 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4db8005180 con 0x7f4df410cad0 2026-03-10T13:36:25.522 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.520+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(6..6 src has 1..6) ==== 1275+0+0 (unknown 3094176658 0 0) 0x7f4de4004770 con 0x7f4df410cad0 
2026-03-10T13:36:25.523 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.523+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f4de405f2e0 con 0x7f4df410cad0 2026-03-10T13:36:25.615 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:25.615+0000 7f4df980c640 1 -- 192.168.123.105:0/804969454 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}) -- 0x7f4db8002bf0 con 0x7f4dc8077fb0 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[51512]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[51512]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[51512]: from='osd.0 v1:192.168.123.105:6801/3141950523' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[51512]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[51512]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[58955]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[58955]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[58955]: from='osd.0 v1:192.168.123.105:6801/3141950523' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[58955]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[58955]: from='mgr.14150 
v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[58955]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:36:25.987 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:25 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:25 vm09 ceph-mon[53367]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:36:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:25 vm09 ceph-mon[53367]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T13:36:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:25 vm09 ceph-mon[53367]: from='osd.0 v1:192.168.123.105:6801/3141950523' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T13:36:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:25 vm09 ceph-mon[53367]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T13:36:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:25 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:25 vm09 ceph-mon[53367]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T13:36:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:25 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:36:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:25 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:36:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:25 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:26.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[51512]: from='client.24137 v1:192.168.123.105:0/804969454' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:36:26.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[51512]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 
2026-03-10T13:36:26.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[51512]: osdmap e7: 1 total, 0 up, 1 in 2026-03-10T13:36:26.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:26.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:26.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3302886966' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f512d6be-c3f7-4742-a120-ab1907d08ac3"}]: dispatch 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3302886966' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f512d6be-c3f7-4742-a120-ab1907d08ac3"}]': finished 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[51512]: osdmap e8: 2 total, 0 up, 2 in 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[51512]: from='osd.0 ' entity='osd.0' 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[58955]: from='client.24137 v1:192.168.123.105:0/804969454' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[58955]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[58955]: osdmap e7: 1 total, 0 up, 1 in 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3302886966' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f512d6be-c3f7-4742-a120-ab1907d08ac3"}]: dispatch 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3302886966' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f512d6be-c3f7-4742-a120-ab1907d08ac3"}]': finished 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[58955]: osdmap e8: 2 total, 0 up, 2 in 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:26.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:26 vm05 ceph-mon[58955]: from='osd.0 ' entity='osd.0' 2026-03-10T13:36:26.832 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 13:36:26 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-0[62706]: 2026-03-10T13:36:26.721+0000 7fea31a56640 -1 osd.0 0 waiting for initial osdmap 2026-03-10T13:36:26.832 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 13:36:26 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-0[62706]: 2026-03-10T13:36:26.732+0000 7fea2c86c640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T13:36:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:26 vm09 ceph-mon[53367]: from='client.24137 v1:192.168.123.105:0/804969454' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:36:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:26 vm09 ceph-mon[53367]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T13:36:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:26 vm09 ceph-mon[53367]: osdmap e7: 1 total, 0 up, 1 in 2026-03-10T13:36:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:26 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:26 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3302886966' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f512d6be-c3f7-4742-a120-ab1907d08ac3"}]: dispatch 2026-03-10T13:36:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:26 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3302886966' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f512d6be-c3f7-4742-a120-ab1907d08ac3"}]': finished 2026-03-10T13:36:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:26 vm09 ceph-mon[53367]: osdmap e8: 2 total, 0 up, 2 in 2026-03-10T13:36:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:26 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:26 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:26 vm09 ceph-mon[53367]: from='osd.0 ' entity='osd.0' 2026-03-10T13:36:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:27 vm05 ceph-mon[51512]: purged_snaps scrub starts 2026-03-10T13:36:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:27 vm05 ceph-mon[51512]: purged_snaps scrub ok 2026-03-10T13:36:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:27 vm05 ceph-mon[51512]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:36:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:27 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3018031408' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:36:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:27 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:27 vm05 ceph-mon[58955]: purged_snaps scrub starts 2026-03-10T13:36:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:27 vm05 ceph-mon[58955]: purged_snaps scrub ok 2026-03-10T13:36:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:27 vm05 ceph-mon[58955]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:36:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:27 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:27 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3018031408' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:36:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:27 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:27 vm09 ceph-mon[53367]: purged_snaps scrub starts 2026-03-10T13:36:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:27 vm09 ceph-mon[53367]: purged_snaps scrub ok 2026-03-10T13:36:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:27 vm09 ceph-mon[53367]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:36:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:27 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3018031408' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:36:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:27 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:29.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:28 vm05 ceph-mon[51512]: osd.0 v1:192.168.123.105:6801/3141950523 boot 2026-03-10T13:36:29.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:28 vm05 ceph-mon[51512]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T13:36:29.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:28 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:29.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:28 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:29.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:28 vm05 ceph-mon[58955]: osd.0 v1:192.168.123.105:6801/3141950523 boot 2026-03-10T13:36:29.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:28 vm05 ceph-mon[58955]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T13:36:29.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:28 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:29.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:28 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:28 vm09 ceph-mon[53367]: osd.0 v1:192.168.123.105:6801/3141950523 boot 2026-03-10T13:36:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:28 vm09 ceph-mon[53367]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T13:36:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:28 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:36:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:28 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:30.081 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:29 vm05 ceph-mon[51512]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:36:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:29 vm05 ceph-mon[51512]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T13:36:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:29 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:30.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:29 vm05 ceph-mon[58955]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:36:30.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:29 vm05 ceph-mon[58955]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T13:36:30.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:29 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:29 vm09 ceph-mon[53367]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:36:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:29 vm09 ceph-mon[53367]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T13:36:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:29 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:31.827 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:31 vm05 ceph-mon[58955]: pgmap v22: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:36:31.827 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:31 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T13:36:31.827 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:31 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:31.827 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:31 vm05 ceph-mon[58955]: Deploying daemon osd.1 on vm05 2026-03-10T13:36:31.827 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:31 vm05 ceph-mon[51512]: pgmap v22: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:36:31.827 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:31 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T13:36:31.827 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:31 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:31.827 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:31 vm05 ceph-mon[51512]: Deploying daemon osd.1 on vm05 2026-03-10T13:36:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:31 vm09 ceph-mon[53367]: pgmap v22: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:36:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:31 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T13:36:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:31 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:31 vm09 ceph-mon[53367]: Deploying daemon osd.1 on vm05 2026-03-10T13:36:33.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:33 vm05 ceph-mon[51512]: pgmap v23: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:36:33.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:33 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:33.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:33 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:33.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:33 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:33.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:33 vm05 ceph-mon[58955]: pgmap v23: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:36:33.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:33 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:33.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:33 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:33.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:33 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:33 vm09 ceph-mon[53367]: pgmap v23: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:36:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:33 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:33 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:33 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:34.597 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:34.596+0000 7f4df880a640 1 -- 192.168.123.105:0/804969454 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 2847272575) 0x7f4db8002bf0 con 0x7f4dc8077fb0 2026-03-10T13:36:34.599 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 1 on host 'vm05' 2026-03-10T13:36:34.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:34.600+0000 7f4df980c640 1 -- 192.168.123.105:0/804969454 >> v1:192.168.123.105:6800/3845654103 conn(0x7f4dc8077fb0 legacy=0x7f4dc807a470 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:34.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:34.600+0000 7f4df980c640 1 -- 192.168.123.105:0/804969454 >> v1:192.168.123.105:6790/0 conn(0x7f4df410cad0 legacy=0x7f4df41a5700 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:34.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:34.600+0000 7f4df980c640 1 -- 192.168.123.105:0/804969454 shutdown_connections 2026-03-10T13:36:34.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:34.600+0000 7f4df980c640 1 -- 
192.168.123.105:0/804969454 >> 192.168.123.105:0/804969454 conn(0x7f4df4100120 msgr2=0x7f4df410b940 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:36:34.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:34.600+0000 7f4df980c640 1 -- 192.168.123.105:0/804969454 shutdown_connections 2026-03-10T13:36:34.600 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:34.600+0000 7f4df980c640 1 -- 192.168.123.105:0/804969454 wait complete. 2026-03-10T13:36:34.768 DEBUG:teuthology.orchestra.run.vm05:osd.1> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.1.service 2026-03-10T13:36:34.770 INFO:tasks.cephadm:Deploying osd.2 on vm05 with /dev/vdc... 2026-03-10T13:36:34.770 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- lvm zap /dev/vdc 2026-03-10T13:36:35.066 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:36:35.299 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 13:36:35 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1[68059]: 2026-03-10T13:36:35.002+0000 7f83e8ddd740 -1 osd.1 0 log_to_monitors true 2026-03-10T13:36:35.299 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:35.299 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:35.299 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:35.299 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:35.300 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:35.300 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:35.300 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:35.300 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:35.300 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:35.300 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:35.300 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:35.300 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:35.300 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:35.300 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:35.300 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:35.300 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:35 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:35 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:35 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:35 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:35 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:35 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:35 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:35 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:35 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[58955]: pgmap v24: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:36:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[58955]: from='osd.1 v1:192.168.123.105:6805/1936282018' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T13:36:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:36:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:36.081 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[51512]: pgmap v24: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:36:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[51512]: from='osd.1 v1:192.168.123.105:6805/1936282018' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T13:36:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:36:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:36 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:36.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:36 vm09 ceph-mon[53367]: pgmap v24: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:36:36.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:36 vm09 ceph-mon[53367]: from='osd.1 v1:192.168.123.105:6805/1936282018' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T13:36:36.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:36 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:36.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:36 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:36.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:36 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:36:36.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:36 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:36.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:36 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:36.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:36 vm09 ceph-mon[53367]: 
from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:36.472 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:36:36.487 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch daemon add osd vm05:/dev/vdc 2026-03-10T13:36:36.642 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:36:36.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.768+0000 7f31a5efa640 1 -- 192.168.123.105:0/1216864158 >> v1:192.168.123.105:6789/0 conn(0x7f31a010cd80 legacy=0x7f31a010f170 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:36.770 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.769+0000 7f31a5efa640 1 -- 192.168.123.105:0/1216864158 shutdown_connections 2026-03-10T13:36:36.770 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.769+0000 7f31a5efa640 1 -- 192.168.123.105:0/1216864158 >> 192.168.123.105:0/1216864158 conn(0x7f31a00fde70 msgr2=0x7f31a01002d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:36:36.770 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.769+0000 7f31a5efa640 1 -- 192.168.123.105:0/1216864158 shutdown_connections 2026-03-10T13:36:36.770 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.769+0000 7f31a5efa640 1 -- 192.168.123.105:0/1216864158 wait complete. 2026-03-10T13:36:36.770 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.770+0000 7f31a5efa640 1 Processor -- start 2026-03-10T13:36:36.770 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.770+0000 7f31a5efa640 1 -- start start 2026-03-10T13:36:36.771 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.770+0000 7f31a5efa640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f31a019c730 con 0x7f31a010cd80 2026-03-10T13:36:36.771 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.770+0000 7f31a5efa640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f31a01a7ef0 con 0x7f31a0075bb0 2026-03-10T13:36:36.771 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.770+0000 7f31a5efa640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f31a01a90d0 con 0x7f31a0077040 2026-03-10T13:36:36.771 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.770+0000 7f319f7fe640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f31a0075bb0 0x7f31a019bbb0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:45356/0 (socket says 192.168.123.105:45356) 2026-03-10T13:36:36.771 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.770+0000 7f319f7fe640 1 -- 192.168.123.105:0/1325264997 learned_addr learned my addr 192.168.123.105:0/1325264997 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:36:36.771 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.771+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 957339566 0 0) 0x7f31a01a7ef0 con 0x7f31a0075bb0 2026-03-10T13:36:36.771 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.771+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 --> 
v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f3174003620 con 0x7f31a0075bb0 2026-03-10T13:36:36.771 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.771+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3702883516 0 0) 0x7f31a01a90d0 con 0x7f31a0077040 2026-03-10T13:36:36.771 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.771+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f31a01a7ef0 con 0x7f31a0077040 2026-03-10T13:36:36.771 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.771+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2691059823 0 0) 0x7f31a019c730 con 0x7f31a010cd80 2026-03-10T13:36:36.771 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.771+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f31a01a90d0 con 0x7f31a010cd80 2026-03-10T13:36:36.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.771+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 845747355 0 0) 0x7f3174003620 con 0x7f31a0075bb0 2026-03-10T13:36:36.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.771+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f31a019c730 con 0x7f31a0075bb0 2026-03-10T13:36:36.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.772+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1174073358 0 0) 0x7f31a01a7ef0 con 0x7f31a0077040 2026-03-10T13:36:36.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.772+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f3174003620 con 0x7f31a0077040 2026-03-10T13:36:36.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.772+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3178108157 0 0) 0x7f31a01a90d0 con 0x7f31a010cd80 2026-03-10T13:36:36.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.772+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f31a01a7ef0 con 0x7f31a010cd80 2026-03-10T13:36:36.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.772+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f3190003260 con 0x7f31a0075bb0 2026-03-10T13:36:36.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.772+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f318c002ef0 con 0x7f31a0077040 2026-03-10T13:36:36.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.772+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 
0x7f3194003390 con 0x7f31a010cd80 2026-03-10T13:36:36.773 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.772+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1263702150 0 0) 0x7f31a019c730 con 0x7f31a0075bb0 2026-03-10T13:36:36.773 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.772+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 >> v1:192.168.123.105:6790/0 conn(0x7f31a0077040 legacy=0x7f31a01a2090 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:36.773 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.773+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 >> v1:192.168.123.105:6789/0 conn(0x7f31a010cd80 legacy=0x7f31a01a57c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:36.773 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.773+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f31a01aa2b0 con 0x7f31a0075bb0 2026-03-10T13:36:36.773 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.773+0000 7f31a5efa640 1 -- 192.168.123.105:0/1325264997 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f31a01a9300 con 0x7f31a0075bb0 2026-03-10T13:36:36.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.773+0000 7f31a5efa640 1 -- 192.168.123.105:0/1325264997 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f31a01a98e0 con 0x7f31a0075bb0 2026-03-10T13:36:36.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.773+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f3190003ca0 con 0x7f31a0075bb0 2026-03-10T13:36:36.774 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.774+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f3190005d50 con 0x7f31a0075bb0 2026-03-10T13:36:36.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.774+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 14) ==== 99944+0+0 (unknown 2251000709 0 0) 0x7f319001e640 con 0x7f31a0075bb0 2026-03-10T13:36:36.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.774+0000 7f31a5efa640 1 -- 192.168.123.105:0/1325264997 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3164005180 con 0x7f31a0075bb0 2026-03-10T13:36:36.775 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.775+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(11..11 src has 1..11) ==== 1667+0+0 (unknown 3340396281 0 0) 0x7f3190093b90 con 0x7f31a0075bb0 2026-03-10T13:36:36.778 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.778+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f319005dde0 con 0x7f31a0075bb0 2026-03-10T13:36:36.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:36.870+0000 7f31a5efa640 1 -- 192.168.123.105:0/1325264997 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": 
"vm05:/dev/vdc", "target": ["mon-mgr", ""]}) -- 0x7f3164002bf0 con 0x7f3174080e00 2026-03-10T13:36:37.017 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 13:36:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1[68059]: 2026-03-10T13:36:37.015+0000 7f83e4d5e640 -1 osd.1 0 waiting for initial osdmap 2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[51512]: Detected new or changed devices on vm05 2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[51512]: from='osd.1 v1:192.168.123.105:6805/1936282018' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[51512]: osdmap e11: 2 total, 1 up, 2 in 2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[51512]: from='osd.1 v1:192.168.123.105:6805/1936282018' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[58955]: Detected new or changed devices on vm05 2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[58955]: from='osd.1 v1:192.168.123.105:6805/1936282018' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[58955]: osdmap e11: 2 total, 1 up, 2 in 2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[58955]: from='osd.1 v1:192.168.123.105:6805/1936282018' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 
2026-03-10T13:36:37.269 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:37 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:37.269 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 13:36:37 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1[68059]: 2026-03-10T13:36:37.021+0000 7f83e0b88640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T13:36:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:37 vm09 ceph-mon[53367]: Detected new or changed devices on vm05 2026-03-10T13:36:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:37 vm09 ceph-mon[53367]: from='osd.1 v1:192.168.123.105:6805/1936282018' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T13:36:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:37 vm09 ceph-mon[53367]: osdmap e11: 2 total, 1 up, 2 in 2026-03-10T13:36:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:37 vm09 ceph-mon[53367]: from='osd.1 v1:192.168.123.105:6805/1936282018' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T13:36:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:37 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:37 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:36:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:37 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:36:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:37 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[58955]: purged_snaps scrub starts 2026-03-10T13:36:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[58955]: purged_snaps scrub ok 2026-03-10T13:36:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[58955]: pgmap v26: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:36:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[58955]: from='client.24151 v1:192.168.123.105:0/1325264997' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:36:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[58955]: from='osd.1 v1:192.168.123.105:6805/1936282018' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T13:36:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[58955]: osdmap e12: 2 total, 1 up, 2 in 2026-03-10T13:36:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd 
metadata", "id": 1}]: dispatch 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3930908373' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a686b53f-59af-40c9-a5d6-bde07754c934"}]: dispatch 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3930908373' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a686b53f-59af-40c9-a5d6-bde07754c934"}]': finished 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[58955]: osd.1 v1:192.168.123.105:6805/1936282018 boot 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[58955]: osdmap e13: 3 total, 2 up, 3 in 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[51512]: purged_snaps scrub starts 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[51512]: purged_snaps scrub ok 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[51512]: pgmap v26: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[51512]: from='client.24151 v1:192.168.123.105:0/1325264997' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[51512]: from='osd.1 v1:192.168.123.105:6805/1936282018' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[51512]: osdmap e12: 2 total, 1 up, 2 in 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3930908373' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a686b53f-59af-40c9-a5d6-bde07754c934"}]: dispatch 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3930908373' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a686b53f-59af-40c9-a5d6-bde07754c934"}]': finished 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[51512]: osd.1 v1:192.168.123.105:6805/1936282018 boot 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[51512]: osdmap e13: 3 total, 2 up, 3 in 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:38 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:38 vm09 ceph-mon[53367]: purged_snaps scrub starts 2026-03-10T13:36:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:38 vm09 ceph-mon[53367]: purged_snaps scrub ok 2026-03-10T13:36:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:38 vm09 ceph-mon[53367]: pgmap v26: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:36:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:38 vm09 ceph-mon[53367]: from='client.24151 v1:192.168.123.105:0/1325264997' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:36:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:38 vm09 ceph-mon[53367]: from='osd.1 v1:192.168.123.105:6805/1936282018' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T13:36:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:38 vm09 ceph-mon[53367]: osdmap e12: 2 total, 1 up, 2 in 2026-03-10T13:36:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:38 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:38 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3930908373' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a686b53f-59af-40c9-a5d6-bde07754c934"}]: dispatch 2026-03-10T13:36:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:38 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3930908373' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a686b53f-59af-40c9-a5d6-bde07754c934"}]': finished 2026-03-10T13:36:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:38 vm09 ceph-mon[53367]: osd.1 v1:192.168.123.105:6805/1936282018 boot 2026-03-10T13:36:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:38 vm09 ceph-mon[53367]: osdmap e13: 3 total, 2 up, 3 in 2026-03-10T13:36:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:38 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:36:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:38 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4249065413' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:36:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:39 vm05 ceph-mon[51512]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T13:36:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:39 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4249065413' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:36:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:39 vm05 ceph-mon[58955]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T13:36:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:39 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:39.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:39 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/4249065413' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:36:39.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:39 vm09 ceph-mon[53367]: osdmap e14: 3 total, 2 up, 3 in 2026-03-10T13:36:39.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:39 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:40.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:40 vm05 ceph-mon[51512]: pgmap v29: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:40 vm05 ceph-mon[58955]: pgmap v29: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:40 vm09 ceph-mon[53367]: pgmap v29: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:42.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:42 vm05 ceph-mon[51512]: pgmap v31: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:42.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:42 vm05 ceph-mon[58955]: pgmap v31: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:42.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:42 vm09 ceph-mon[53367]: pgmap v31: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:43.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:43 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T13:36:43.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:43 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:43.152 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:43 vm05 ceph-mon[58955]: Deploying daemon osd.2 on vm05 2026-03-10T13:36:43.153 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:43 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T13:36:43.153 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:43 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:43.153 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:43 vm05 ceph-mon[51512]: Deploying daemon osd.2 on vm05 2026-03-10T13:36:43.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:43 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T13:36:43.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:43 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:43.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:43 vm09 ceph-mon[53367]: Deploying daemon osd.2 on vm05 2026-03-10T13:36:44.373 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:44 vm05 ceph-mon[58955]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:44.374 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:44 vm05 ceph-mon[51512]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:44.423 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:44 vm09 ceph-mon[53367]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:45.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:45 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:45.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:45 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:45.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:45 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:45.423 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:45 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:45.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:45 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:45.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:45 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:45.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:45 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:45.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:45 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:45.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:45 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:45.708 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 2 on host 'vm05' 2026-03-10T13:36:45.708 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:45.707+0000 7f319cff9640 1 -- 192.168.123.105:0/1325264997 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 1234733726) 0x7f3164002bf0 con 0x7f3174080e00 2026-03-10T13:36:45.713 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:45.711+0000 7f31a5efa640 1 -- 192.168.123.105:0/1325264997 >> v1:192.168.123.105:6800/3845654103 conn(0x7f3174080e00 legacy=0x7f31740832c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:45.713 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:45.711+0000 7f31a5efa640 1 -- 192.168.123.105:0/1325264997 >> v1:192.168.123.109:6789/0 conn(0x7f31a0075bb0 legacy=0x7f31a019bbb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:45.714 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:45.713+0000 7f31a5efa640 1 -- 192.168.123.105:0/1325264997 shutdown_connections 2026-03-10T13:36:45.714 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:45.713+0000 7f31a5efa640 1 -- 192.168.123.105:0/1325264997 >> 192.168.123.105:0/1325264997 conn(0x7f31a00fde70 msgr2=0x7f31a00ff900 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:36:45.714 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:45.713+0000 7f31a5efa640 1 -- 192.168.123.105:0/1325264997 shutdown_connections 2026-03-10T13:36:45.714 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:45.713+0000 7f31a5efa640 1 -- 192.168.123.105:0/1325264997 wait complete. 2026-03-10T13:36:45.872 DEBUG:teuthology.orchestra.run.vm05:osd.2> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.2.service 2026-03-10T13:36:45.874 INFO:tasks.cephadm:Deploying osd.3 on vm05 with /dev/vdb... 
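
Each new OSD leaves the same audit trail in the mon journals, logged three times over because all three mons record it: an "osd new" from client.bootstrap-osd, a "boot" line, an osdmap epoch bump, then "osd crush set-device-class" and "create-or-move" dispatched by the daemon itself. A grep pipeline along these lines can pull one OSD's bring-up out of an archived run; the teuthology.log filename is an assumption about where the archive stores it, and filtering to mon.a sidesteps the triple duplication.

    # Trace one OSD's bring-up through the mon audit entries.
    # LOG path and OSD id are assumptions for illustration.
    LOG=teuthology.log
    OSD=2
    grep -E "osd new|osd\.$OSD .* boot|osdmap e[0-9]+|set-device-class|create-or-move" "$LOG" \
        | grep "ceph.mon.a" \
        | sed 's/.*ceph-mon\[[0-9]*\]: //' \
        | awk '!seen[$0]++'
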
2026-03-10T13:36:45.874 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- lvm zap /dev/vdb 2026-03-10T13:36:46.101 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:46 vm05 ceph-mon[58955]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:46.101 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:46 vm05 ceph-mon[58955]: from='osd.2 v1:192.168.123.105:6809/3999426341' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:36:46.101 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:46 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:46.101 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:46 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:46.101 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:46 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:46.101 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:46 vm05 ceph-mon[51512]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:46.101 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:46 vm05 ceph-mon[51512]: from='osd.2 v1:192.168.123.105:6809/3999426341' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:36:46.101 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:46 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:46.101 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:46 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:46.101 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:46 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:46.144 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:36:46.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:46 vm09 ceph-mon[53367]: pgmap v33: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:46.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:46 vm09 ceph-mon[53367]: from='osd.2 v1:192.168.123.105:6809/3999426341' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:36:46.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:46 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:46.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:46 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:46.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:46 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:47.613 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:36:47.630 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch daemon add osd vm05:/dev/vdb 2026-03-10T13:36:47.810 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:36:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[51512]: from='osd.2 v1:192.168.123.105:6809/3999426341' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T13:36:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[51512]: osdmap e15: 3 total, 2 up, 3 in 2026-03-10T13:36:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[51512]: from='osd.2 v1:192.168.123.105:6809/3999426341' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[51512]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[51512]: Detected new or changed devices on vm05 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[58955]: from='osd.2 v1:192.168.123.105:6809/3999426341' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[58955]: osdmap e15: 3 total, 2 up, 3 in 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[58955]: from='osd.2 v1:192.168.123.105:6809/3999426341' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[58955]: from='mgr.14150 
v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[58955]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[58955]: Detected new or changed devices on vm05 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:47.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:47 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:47 vm09 ceph-mon[53367]: from='osd.2 v1:192.168.123.105:6809/3999426341' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T13:36:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:47 vm09 ceph-mon[53367]: osdmap e15: 3 total, 2 up, 3 in 2026-03-10T13:36:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:47 vm09 ceph-mon[53367]: from='osd.2 v1:192.168.123.105:6809/3999426341' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T13:36:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:47 vm09 ceph-mon[53367]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:47 vm09 ceph-mon[53367]: Detected new or changed devices on vm05 2026-03-10T13:36:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:36:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:47.959 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.958+0000 7f7cddaf9640 1 -- 192.168.123.105:0/3726802846 >> v1:192.168.123.105:6789/0 conn(0x7f7cd810c8e0 legacy=0x7f7cd810eda0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:47.959 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.958+0000 7f7cddaf9640 1 -- 192.168.123.105:0/3726802846 shutdown_connections 2026-03-10T13:36:47.959 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.958+0000 7f7cddaf9640 1 -- 192.168.123.105:0/3726802846 >> 192.168.123.105:0/3726802846 conn(0x7f7cd80fff30 msgr2=0x7f7cd8102370 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:36:47.959 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.959+0000 7f7cddaf9640 1 -- 192.168.123.105:0/3726802846 shutdown_connections 2026-03-10T13:36:47.959 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.959+0000 7f7cddaf9640 1 -- 192.168.123.105:0/3726802846 wait complete. 2026-03-10T13:36:47.959 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.959+0000 7f7cddaf9640 1 Processor -- start 2026-03-10T13:36:47.959 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.959+0000 7f7cddaf9640 1 -- start start 2026-03-10T13:36:47.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.959+0000 7f7cddaf9640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7cd819c860 con 0x7f7cd8108bd0 2026-03-10T13:36:47.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.959+0000 7f7cddaf9640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7cd81a8010 con 0x7f7cd81047a0 2026-03-10T13:36:47.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.959+0000 7f7cddaf9640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7cd81a91f0 con 0x7f7cd810c8e0 2026-03-10T13:36:47.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.960+0000 7f7cdcaf7640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f7cd81047a0 0x7f7cd819bce0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:34974/0 (socket says 192.168.123.105:34974) 2026-03-10T13:36:47.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.960+0000 7f7cdcaf7640 1 -- 192.168.123.105:0/2965872546 learned_addr learned my addr 192.168.123.105:0/2965872546 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:36:47.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.960+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3659069432 0 0) 0x7f7cd81a8010 con 0x7f7cd81047a0 2026-03-10T13:36:47.960 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.960+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7ca0003620 con 0x7f7cd81047a0 2026-03-10T13:36:47.961 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.960+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3071138765 0 0) 0x7f7cd819c860 con 0x7f7cd8108bd0 2026-03-10T13:36:47.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.960+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7cd81a8010 con 0x7f7cd8108bd0 2026-03-10T13:36:47.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.960+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2277224611 0 0) 0x7f7cd81a91f0 con 0x7f7cd810c8e0 2026-03-10T13:36:47.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.960+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7cd819c860 con 0x7f7cd810c8e0 2026-03-10T13:36:47.961 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.960+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2548014094 0 0) 0x7f7ca0003620 con 0x7f7cd81047a0 2026-03-10T13:36:47.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.962+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f7cd81a91f0 con 0x7f7cd81047a0 2026-03-10T13:36:47.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.962+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f7cc0003200 con 0x7f7cd81047a0 2026-03-10T13:36:47.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.962+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3667058567 0 0) 0x7f7cd81a8010 con 0x7f7cd8108bd0 2026-03-10T13:36:47.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.962+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f7ca0003620 con 0x7f7cd8108bd0 2026-03-10T13:36:47.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.962+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 17505328 0 0) 0x7f7cd819c860 con 0x7f7cd810c8e0 2026-03-10T13:36:47.962 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.962+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f7cd81a8010 con 0x7f7cd810c8e0 2026-03-10T13:36:47.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.962+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f7cc8002fb0 con 0x7f7cd8108bd0 2026-03-10T13:36:47.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.962+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f7cd4003040 con 0x7f7cd810c8e0 2026-03-10T13:36:47.963 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.962+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1654005653 0 0) 0x7f7cd81a91f0 con 0x7f7cd81047a0 2026-03-10T13:36:47.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.963+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 >> v1:192.168.123.105:6790/0 conn(0x7f7cd810c8e0 legacy=0x7f7cd81a58e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:47.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.963+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 >> v1:192.168.123.105:6789/0 conn(0x7f7cd8108bd0 legacy=0x7f7cd81a21b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:47.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.963+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7cd81aa3d0 con 0x7f7cd81047a0 2026-03-10T13:36:47.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.963+0000 7f7cddaf9640 1 -- 192.168.123.105:0/2965872546 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f7cd81a7060 con 0x7f7cd81047a0 2026-03-10T13:36:47.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.963+0000 7f7cddaf9640 1 -- 192.168.123.105:0/2965872546 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f7cd81a7640 con 0x7f7cd81047a0 2026-03-10T13:36:47.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.964+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f7cc0003480 con 0x7f7cd81047a0 2026-03-10T13:36:47.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.964+0000 7f7cddaf9640 1 -- 192.168.123.105:0/2965872546 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7ca4005180 con 0x7f7cd81047a0 2026-03-10T13:36:47.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.964+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f7cc0005d10 con 0x7f7cd81047a0 2026-03-10T13:36:47.965 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.965+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 14) ==== 99944+0+0 (unknown 2251000709 0 0) 0x7f7cc001e5e0 con 0x7f7cd81047a0 2026-03-10T13:36:47.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.966+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(16..16 src has 1..16) ==== 1975+0+0 (unknown 2010545467 0 0) 0x7f7cc0093ca0 con 0x7f7cd81047a0 2026-03-10T13:36:47.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:47.968+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f7cc005ddc0 con 0x7f7cd81047a0 2026-03-10T13:36:48.071 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:48.070+0000 7f7cddaf9640 1 -- 192.168.123.105:0/2965872546 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdb", "target": ["mon-mgr", ""]}) -- 0x7f7ca4002bf0 
con 0x7f7ca0078440 2026-03-10T13:36:48.541 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[51512]: from='osd.2 v1:192.168.123.105:6809/3999426341' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T13:36:48.541 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[51512]: osdmap e16: 3 total, 2 up, 3 in 2026-03-10T13:36:48.541 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:48.541 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:48.542 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[51512]: from='client.24175 v1:192.168.123.105:0/2965872546' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:36:48.542 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:36:48.542 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:36:48.542 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:48.542 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:48.542 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[58955]: from='osd.2 v1:192.168.123.105:6809/3999426341' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T13:36:48.542 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[58955]: osdmap e16: 3 total, 2 up, 3 in 2026-03-10T13:36:48.542 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:48.542 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:48.542 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[58955]: from='client.24175 v1:192.168.123.105:0/2965872546' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:36:48.542 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:36:48.542 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[58955]: 
from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:36:48.542 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:48.542 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:48 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:48.542 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 13:36:48 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-2[73563]: 2026-03-10T13:36:48.494+0000 7f6738a31640 -1 osd.2 0 waiting for initial osdmap 2026-03-10T13:36:48.542 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 13:36:48 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-2[73563]: 2026-03-10T13:36:48.502+0000 7f6733847640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T13:36:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:48 vm09 ceph-mon[53367]: from='osd.2 v1:192.168.123.105:6809/3999426341' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T13:36:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:48 vm09 ceph-mon[53367]: osdmap e16: 3 total, 2 up, 3 in 2026-03-10T13:36:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:48 vm09 ceph-mon[53367]: from='client.24175 v1:192.168.123.105:0/2965872546' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:36:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:36:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:36:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[58955]: purged_snaps scrub starts 2026-03-10T13:36:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[58955]: purged_snaps scrub ok 2026-03-10T13:36:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[58955]: from='osd.2 v1:192.168.123.105:6809/3999426341' entity='osd.2' 
2026-03-10T13:36:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[58955]: pgmap v37: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2930897235' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e9aa7ce5-7d1a-4946-9551-10bfc47bd58b"}]: dispatch 2026-03-10T13:36:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[58955]: from='client.24197 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e9aa7ce5-7d1a-4946-9551-10bfc47bd58b"}]: dispatch 2026-03-10T13:36:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[58955]: from='client.24197 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e9aa7ce5-7d1a-4946-9551-10bfc47bd58b"}]': finished 2026-03-10T13:36:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[58955]: osd.2 v1:192.168.123.105:6809/3999426341 boot 2026-03-10T13:36:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[58955]: osdmap e17: 4 total, 3 up, 4 in 2026-03-10T13:36:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1846281908' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:36:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[51512]: purged_snaps scrub starts 2026-03-10T13:36:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[51512]: purged_snaps scrub ok 2026-03-10T13:36:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[51512]: from='osd.2 v1:192.168.123.105:6809/3999426341' entity='osd.2' 2026-03-10T13:36:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[51512]: pgmap v37: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2930897235' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e9aa7ce5-7d1a-4946-9551-10bfc47bd58b"}]: dispatch 2026-03-10T13:36:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[51512]: from='client.24197 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e9aa7ce5-7d1a-4946-9551-10bfc47bd58b"}]: dispatch 2026-03-10T13:36:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[51512]: from='client.24197 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e9aa7ce5-7d1a-4946-9551-10bfc47bd58b"}]': finished 2026-03-10T13:36:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[51512]: osd.2 v1:192.168.123.105:6809/3999426341 boot 2026-03-10T13:36:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[51512]: osdmap e17: 4 total, 3 up, 4 in 2026-03-10T13:36:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1846281908' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:36:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:49 vm09 ceph-mon[53367]: purged_snaps scrub starts 2026-03-10T13:36:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:49 vm09 ceph-mon[53367]: purged_snaps scrub ok 2026-03-10T13:36:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:49 vm09 ceph-mon[53367]: from='osd.2 v1:192.168.123.105:6809/3999426341' entity='osd.2' 2026-03-10T13:36:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:49 vm09 ceph-mon[53367]: pgmap v37: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:36:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:49 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2930897235' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e9aa7ce5-7d1a-4946-9551-10bfc47bd58b"}]: dispatch 2026-03-10T13:36:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:49 vm09 ceph-mon[53367]: from='client.24197 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e9aa7ce5-7d1a-4946-9551-10bfc47bd58b"}]: dispatch 2026-03-10T13:36:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:49 vm09 ceph-mon[53367]: from='client.24197 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e9aa7ce5-7d1a-4946-9551-10bfc47bd58b"}]': finished 2026-03-10T13:36:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:49 vm09 ceph-mon[53367]: osd.2 v1:192.168.123.105:6809/3999426341 boot 2026-03-10T13:36:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:49 vm09 ceph-mon[53367]: osdmap e17: 4 total, 3 up, 4 in 2026-03-10T13:36:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:49 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:36:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:49 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1846281908' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:36:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:50 vm05 ceph-mon[58955]: osdmap e18: 4 total, 3 up, 4 in 2026-03-10T13:36:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:50 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:50 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:36:51.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:50 vm05 ceph-mon[51512]: osdmap e18: 4 total, 3 up, 4 in 2026-03-10T13:36:51.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:50 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:51.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:50 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:36:51.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:50 vm09 ceph-mon[53367]: osdmap e18: 4 total, 3 up, 4 in 2026-03-10T13:36:51.435 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:50 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:51.435 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:50 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": 
true}]: dispatch 2026-03-10T13:36:52.331 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78315]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdd 2026-03-10T13:36:52.331 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78315]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T13:36:52.331 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T13:36:52.331 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78315]: pam_unix(sudo:session): session closed for user root 2026-03-10T13:36:52.331 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78319]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdc 2026-03-10T13:36:52.331 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78319]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T13:36:52.331 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T13:36:52.331 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78319]: pam_unix(sudo:session): session closed for user root 2026-03-10T13:36:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:51 vm05 ceph-mon[58955]: pgmap v40: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:36:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:51 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T13:36:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:51 vm05 ceph-mon[58955]: osdmap e19: 4 total, 3 up, 4 in 2026-03-10T13:36:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:51 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:51 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:36:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78423]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T13:36:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78423]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T13:36:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78423]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T13:36:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78423]: pam_unix(sudo:session): session closed for user root 2026-03-10T13:36:52.332 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78311]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 2026-03-10T13:36:52.332 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78311]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 
2026-03-10T13:36:52.332 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T13:36:52.332 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78311]: pam_unix(sudo:session): session closed for user root 2026-03-10T13:36:52.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:51 vm05 ceph-mon[51512]: pgmap v40: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:36:52.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:51 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T13:36:52.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:51 vm05 ceph-mon[51512]: osdmap e19: 4 total, 3 up, 4 in 2026-03-10T13:36:52.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:51 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:52.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:51 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:36:52.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78325]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T13:36:52.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78325]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T13:36:52.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78325]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T13:36:52.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 sudo[78325]: pam_unix(sudo:session): session closed for user root 2026-03-10T13:36:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:51 vm09 ceph-mon[53367]: pgmap v40: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:36:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:51 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T13:36:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:51 vm09 ceph-mon[53367]: osdmap e19: 4 total, 3 up, 4 in 2026-03-10T13:36:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:51 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:51 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:36:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 sudo[55363]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T13:36:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 sudo[55363]: 
pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T13:36:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 sudo[55363]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T13:36:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 sudo[55363]: pam_unix(sudo:session): session closed for user root 2026-03-10T13:36:52.986 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T13:36:52.987 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: osdmap e20: 4 total, 3 up, 4 in 2026-03-10T13:36:52.987 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:52.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:52.988+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f7cc0058d50 con 0x7f7cd81047a0 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T13:36:53.279 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: osdmap e21: 4 total, 3 up, 4 in 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: osdmap e20: 4 total, 3 up, 4 in 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:53.279 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 
ceph-mon[58955]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: osdmap e21: 4 total, 3 up, 4 in 2026-03-10T13:36:53.280 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:52 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: osdmap e20: 4 total, 3 up, 4 in 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:36:53.423 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: osdmap e21: 4 total, 3 up, 4 in 2026-03-10T13:36:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:52 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:53 vm05 ceph-mon[58955]: pgmap v43: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:36:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:53 vm05 ceph-mon[58955]: mgrmap e15: y(active, since 66s), standbys: x 2026-03-10T13:36:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:53 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T13:36:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:53 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:53 vm05 ceph-mon[58955]: Deploying daemon osd.3 on vm05 2026-03-10T13:36:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:53 vm05 ceph-mon[51512]: pgmap v43: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:36:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:53 vm05 ceph-mon[51512]: mgrmap e15: y(active, since 66s), standbys: x 2026-03-10T13:36:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:53 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: 
dispatch 2026-03-10T13:36:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:53 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:53 vm05 ceph-mon[51512]: Deploying daemon osd.3 on vm05 2026-03-10T13:36:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:53 vm09 ceph-mon[53367]: pgmap v43: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:36:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:53 vm09 ceph-mon[53367]: mgrmap e15: y(active, since 66s), standbys: x 2026-03-10T13:36:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:53 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T13:36:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:53 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:53 vm09 ceph-mon[53367]: Deploying daemon osd.3 on vm05 2026-03-10T13:36:56.223 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:55 vm05 ceph-mon[58955]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:36:56.224 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:55 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:56.224 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:55 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:56.224 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:55 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:56.224 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:55 vm05 ceph-mon[51512]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:36:56.224 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:55 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:56.224 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:55 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:56.224 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:55 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:55 vm09 ceph-mon[53367]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:36:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:55 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:55 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:55 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:57.072 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:57.071+0000 7f7ccdffb640 1 -- 192.168.123.105:0/2965872546 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 377247425) 0x7f7ca4002bf0 con 0x7f7ca0078440 2026-03-10T13:36:57.075 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 3 on host 'vm05' 2026-03-10T13:36:57.076 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:57.075+0000 7f7cddaf9640 1 -- 192.168.123.105:0/2965872546 >> v1:192.168.123.105:6800/3845654103 conn(0x7f7ca0078440 legacy=0x7f7ca007a900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:57.076 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:57.075+0000 7f7cddaf9640 1 -- 192.168.123.105:0/2965872546 >> v1:192.168.123.109:6789/0 conn(0x7f7cd81047a0 legacy=0x7f7cd819bce0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:57.078 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:57.077+0000 7f7cddaf9640 1 -- 192.168.123.105:0/2965872546 shutdown_connections 2026-03-10T13:36:57.078 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:57.077+0000 7f7cddaf9640 1 -- 192.168.123.105:0/2965872546 >> 192.168.123.105:0/2965872546 conn(0x7f7cd80fff30 msgr2=0x7f7cd810b750 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:36:57.078 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:57.077+0000 7f7cddaf9640 1 -- 192.168.123.105:0/2965872546 shutdown_connections 2026-03-10T13:36:57.078 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:36:57.078+0000 7f7cddaf9640 1 -- 192.168.123.105:0/2965872546 wait complete. 2026-03-10T13:36:57.230 DEBUG:teuthology.orchestra.run.vm05:osd.3> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.3.service 2026-03-10T13:36:57.231 INFO:tasks.cephadm:Deploying osd.4 on vm09 with /dev/vde... 
2026-03-10T13:36:57.231 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- lvm zap /dev/vde 2026-03-10T13:36:57.393 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:36:57.508 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:57 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:57.508 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:57 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:57.508 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:57 vm09 ceph-mon[53367]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:36:57.508 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:57 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:57.508 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:57 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:57.508 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:57 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:57.508 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:57 vm09 ceph-mon[53367]: from='osd.3 v1:192.168.123.105:6813/693788844' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T13:36:57.508 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:57 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:57.508 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:57 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:57.508 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:57 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[51512]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:57.832 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[51512]: from='osd.3 v1:192.168.123.105:6813/693788844' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[58955]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[58955]: from='osd.3 v1:192.168.123.105:6813/693788844' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:57 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:58.376 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:36:58.394 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch daemon add osd vm09:/dev/vde 2026-03-10T13:36:58.566 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:36:58.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.704+0000 7fe313577640 1 -- 192.168.123.109:0/425885738 >> v1:192.168.123.109:6789/0 conn(0x7fe314108c70 legacy=0x7fe31410b130 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:58.705 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.705+0000 7fe313577640 1 -- 192.168.123.109:0/425885738 shutdown_connections 2026-03-10T13:36:58.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.705+0000 7fe313577640 1 -- 192.168.123.109:0/425885738 >> 192.168.123.109:0/425885738 conn(0x7fe3140fbe50 msgr2=0x7fe3140fe2b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:36:58.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.705+0000 7fe313577640 1 -- 192.168.123.109:0/425885738 shutdown_connections 2026-03-10T13:36:58.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.705+0000 7fe313577640 1 -- 192.168.123.109:0/425885738 wait complete. 2026-03-10T13:36:58.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.705+0000 7fe313577640 1 Processor -- start 2026-03-10T13:36:58.705 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.705+0000 7fe313577640 1 -- start start 2026-03-10T13:36:58.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.706+0000 7fe313577640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe31406e210 con 0x7fe314100cf0 2026-03-10T13:36:58.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.706+0000 7fe313577640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe3141b5100 con 0x7fe314108c70 2026-03-10T13:36:58.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.706+0000 7fe313577640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe3141b62e0 con 0x7fe314104f60 2026-03-10T13:36:58.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.706+0000 7fe312d76640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7fe314108c70 0x7fe3141b29d0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.109:58420/0 (socket says 192.168.123.109:58420) 2026-03-10T13:36:58.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.706+0000 7fe312d76640 1 -- 192.168.123.109:0/3653481721 learned_addr learned my addr 192.168.123.109:0/3653481721 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-10T13:36:58.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.706+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4098752511 0 0) 0x7fe3141b5100 con 0x7fe314108c70 2026-03-10T13:36:58.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.706+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fe2e4003620 con 0x7fe314108c70 2026-03-10T13:36:58.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.706+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2736859900 0 0) 0x7fe31406e210 con 0x7fe314100cf0 2026-03-10T13:36:58.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.706+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fe3141b5100 con 0x7fe314100cf0 2026-03-10T13:36:58.706 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.706+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2340529789 0 0) 
0x7fe3141b62e0 con 0x7fe314104f60 2026-03-10T13:36:58.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.706+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fe31406e210 con 0x7fe314104f60 2026-03-10T13:36:58.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.707+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3018559951 0 0) 0x7fe2e4003620 con 0x7fe314108c70 2026-03-10T13:36:58.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.707+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fe3141b62e0 con 0x7fe314108c70 2026-03-10T13:36:58.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.707+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fe304002fc0 con 0x7fe314108c70 2026-03-10T13:36:58.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.707+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4039929962 0 0) 0x7fe31406e210 con 0x7fe314104f60 2026-03-10T13:36:58.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.707+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fe2e4003620 con 0x7fe314104f60 2026-03-10T13:36:58.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.707+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 674231165 0 0) 0x7fe3141b62e0 con 0x7fe314108c70 2026-03-10T13:36:58.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.707+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 >> v1:192.168.123.105:6790/0 conn(0x7fe314104f60 legacy=0x7fe314073b70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:58.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.707+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 >> v1:192.168.123.105:6789/0 conn(0x7fe314100cf0 legacy=0x7fe31406d690 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:36:58.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.707+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe3141b74c0 con 0x7fe314108c70 2026-03-10T13:36:58.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.707+0000 7fe313577640 1 -- 192.168.123.109:0/3653481721 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fe3141b6510 con 0x7fe314108c70 2026-03-10T13:36:58.707 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.707+0000 7fe313577640 1 -- 192.168.123.109:0/3653481721 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fe3141b6aa0 con 0x7fe314108c70 2026-03-10T13:36:58.708 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.708+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fe304004af0 con 0x7fe314108c70 2026-03-10T13:36:58.708 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.708+0000 
7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fe3040051b0 con 0x7fe314108c70 2026-03-10T13:36:58.709 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.709+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7fe304005370 con 0x7fe314108c70 2026-03-10T13:36:58.709 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.709+0000 7fe2f97fa640 1 -- 192.168.123.109:0/3653481721 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe31410d3e0 con 0x7fe314108c70 2026-03-10T13:36:58.711 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.710+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(22..22 src has 1..22) ==== 2637+0+0 (unknown 1307515677 0 0) 0x7fe304094660 con 0x7fe314108c70 2026-03-10T13:36:58.712 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.712+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fe30405f0b0 con 0x7fe314108c70 2026-03-10T13:36:58.805 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:36:58.805+0000 7fe2f97fa640 1 -- 192.168.123.109:0/3653481721 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}) -- 0x7fe3141b7060 con 0x7fe2e4078440 2026-03-10T13:36:58.870 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:58 vm09 ceph-mon[53367]: from='osd.3 v1:192.168.123.105:6813/693788844' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T13:36:59.161 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:58 vm09 ceph-mon[53367]: osdmap e22: 4 total, 3 up, 4 in 2026-03-10T13:36:59.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:58 vm09 ceph-mon[53367]: from='osd.3 v1:192.168.123.105:6813/693788844' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T13:36:59.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:58 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:59.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:58 vm09 ceph-mon[53367]: Detected new or changed devices on vm05 2026-03-10T13:36:59.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:58 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:59.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:58 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:59.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:58 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:36:59.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:58 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:59.162 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:58 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:59.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:58 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:59.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:58 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:36:59.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:58 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:36:59.162 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:58 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[58955]: from='osd.3 v1:192.168.123.105:6813/693788844' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T13:36:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[58955]: osdmap e22: 4 total, 3 up, 4 in 2026-03-10T13:36:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[58955]: from='osd.3 v1:192.168.123.105:6813/693788844' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T13:36:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[58955]: Detected new or changed devices on vm05 2026-03-10T13:36:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:36:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 
2026-03-10T13:36:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:36:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[51512]: from='osd.3 v1:192.168.123.105:6813/693788844' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T13:36:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[51512]: osdmap e22: 4 total, 3 up, 4 in 2026-03-10T13:36:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[51512]: from='osd.3 v1:192.168.123.105:6813/693788844' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T13:36:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:36:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[51512]: Detected new or changed devices on vm05 2026-03-10T13:36:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:36:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:36:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:36:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:36:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:36:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:58 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:36:59.332 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 13:36:58 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-3[79227]: 
2026-03-10T13:36:58.865+0000 7fdfe6874640 -1 osd.3 0 waiting for initial osdmap 2026-03-10T13:36:59.332 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 13:36:58 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-3[79227]: 2026-03-10T13:36:58.871+0000 7fdfe1e8b640 -1 osd.3 23 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T13:37:00.067 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:59 vm09 ceph-mon[53367]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:37:00.067 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:59 vm09 ceph-mon[53367]: from='client.24211 v1:192.168.123.109:0/3653481721' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:00.067 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:59 vm09 ceph-mon[53367]: from='osd.3 v1:192.168.123.105:6813/693788844' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T13:37:00.067 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:59 vm09 ceph-mon[53367]: osdmap e23: 4 total, 3 up, 4 in 2026-03-10T13:37:00.067 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:59 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:37:00.067 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:59 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:37:00.067 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:59 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.109:0/3482564092' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "72d4c584-8c2a-4a71-a3f3-b3a23f142206"}]: dispatch 2026-03-10T13:37:00.068 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:59 vm09 ceph-mon[53367]: from='client.24217 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "72d4c584-8c2a-4a71-a3f3-b3a23f142206"}]: dispatch 2026-03-10T13:37:00.068 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:59 vm09 ceph-mon[53367]: from='client.24217 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "72d4c584-8c2a-4a71-a3f3-b3a23f142206"}]': finished 2026-03-10T13:37:00.068 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:59 vm09 ceph-mon[53367]: osd.3 v1:192.168.123.105:6813/693788844 boot 2026-03-10T13:37:00.068 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:59 vm09 ceph-mon[53367]: osdmap e24: 5 total, 4 up, 5 in 2026-03-10T13:37:00.068 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:59 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:37:00.068 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:36:59 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[58955]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[58955]: from='client.24211 v1:192.168.123.109:0/3653481721' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[58955]: from='osd.3 v1:192.168.123.105:6813/693788844' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[58955]: osdmap e23: 4 total, 3 up, 4 in 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.109:0/3482564092' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "72d4c584-8c2a-4a71-a3f3-b3a23f142206"}]: dispatch 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[58955]: from='client.24217 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "72d4c584-8c2a-4a71-a3f3-b3a23f142206"}]: dispatch 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[58955]: from='client.24217 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "72d4c584-8c2a-4a71-a3f3-b3a23f142206"}]': finished 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[58955]: osd.3 v1:192.168.123.105:6813/693788844 boot 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[58955]: osdmap e24: 5 total, 4 up, 5 in 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[51512]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[51512]: from='client.24211 v1:192.168.123.109:0/3653481721' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[51512]: from='osd.3 v1:192.168.123.105:6813/693788844' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[51512]: osdmap e23: 4 total, 3 up, 4 in 2026-03-10T13:37:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:37:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:37:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.109:0/3482564092' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "72d4c584-8c2a-4a71-a3f3-b3a23f142206"}]: dispatch 2026-03-10T13:37:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[51512]: from='client.24217 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "72d4c584-8c2a-4a71-a3f3-b3a23f142206"}]: dispatch 2026-03-10T13:37:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[51512]: from='client.24217 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "72d4c584-8c2a-4a71-a3f3-b3a23f142206"}]': finished 2026-03-10T13:37:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[51512]: osd.3 v1:192.168.123.105:6813/693788844 boot 2026-03-10T13:37:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[51512]: osdmap e24: 5 total, 4 up, 5 in 2026-03-10T13:37:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:37:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:36:59 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:37:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:00 vm09 ceph-mon[53367]: purged_snaps scrub starts 2026-03-10T13:37:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:00 vm09 ceph-mon[53367]: purged_snaps scrub ok 2026-03-10T13:37:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.109:0/4252123421' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:37:01.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:00 vm05 ceph-mon[58955]: purged_snaps scrub starts 2026-03-10T13:37:01.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:00 vm05 ceph-mon[58955]: purged_snaps scrub ok 2026-03-10T13:37:01.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.109:0/4252123421' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:37:01.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:00 vm05 ceph-mon[51512]: purged_snaps scrub starts 2026-03-10T13:37:01.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:00 vm05 ceph-mon[51512]: purged_snaps scrub ok 2026-03-10T13:37:01.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:00 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.109:0/4252123421' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:37:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:01 vm09 ceph-mon[53367]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T13:37:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:01 vm09 ceph-mon[53367]: osdmap e25: 5 total, 4 up, 5 in 2026-03-10T13:37:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:01 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:37:02.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:01 vm05 ceph-mon[58955]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T13:37:02.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:01 vm05 ceph-mon[58955]: osdmap e25: 5 total, 4 up, 5 in 2026-03-10T13:37:02.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:01 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:37:02.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:01 vm05 ceph-mon[51512]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T13:37:02.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:01 vm05 ceph-mon[51512]: osdmap e25: 5 total, 4 up, 5 in 2026-03-10T13:37:02.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:01 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:37:04.052 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:03 vm09 ceph-mon[53367]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T13:37:04.052 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:03 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T13:37:04.052 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:03 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:03 vm05 ceph-mon[58955]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T13:37:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:03 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T13:37:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:03 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:03 vm05 ceph-mon[51512]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T13:37:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T13:37:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:04.965 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:04 vm09 ceph-mon[53367]: Deploying daemon osd.4 on vm09 2026-03-10T13:37:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:04 vm05 ceph-mon[58955]: Deploying daemon osd.4 on vm09 2026-03-10T13:37:05.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:04 vm05 ceph-mon[51512]: Deploying daemon osd.4 on vm09 2026-03-10T13:37:05.899 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:05 vm09 ceph-mon[53367]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T13:37:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:05 vm05 ceph-mon[58955]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T13:37:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:05 vm05 ceph-mon[51512]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T13:37:07.035 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:06 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:07.035 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:06 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:07.035 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:06 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:07.035 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:06 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:07.035 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:06 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:07.035 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:06 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:07.035 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:06 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:07.035 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:06 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:07.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:06 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:07.375 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 4 on host 'vm09' 2026-03-10T13:37:07.375 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:07.375+0000 7fe2fb7fe640 1 -- 192.168.123.109:0/3653481721 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 715239282) 0x7fe3141b7060 con 0x7fe2e4078440 2026-03-10T13:37:07.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:07.377+0000 7fe2f97fa640 1 -- 192.168.123.109:0/3653481721 >> v1:192.168.123.105:6800/3845654103 conn(0x7fe2e4078440 legacy=0x7fe2e407a900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:07.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:07.377+0000 7fe2f97fa640 1 -- 192.168.123.109:0/3653481721 >> v1:192.168.123.109:6789/0 conn(0x7fe314108c70 legacy=0x7fe3141b29d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:07.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:07.377+0000 7fe2f97fa640 1 -- 192.168.123.109:0/3653481721 shutdown_connections 2026-03-10T13:37:07.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:07.377+0000 7fe2f97fa640 1 -- 192.168.123.109:0/3653481721 >> 192.168.123.109:0/3653481721 conn(0x7fe3140fbe50 msgr2=0x7fe314108550 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:07.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:07.378+0000 7fe2f97fa640 1 -- 192.168.123.109:0/3653481721 shutdown_connections 2026-03-10T13:37:07.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:07.378+0000 
7fe2f97fa640 1 -- 192.168.123.109:0/3653481721 wait complete. 2026-03-10T13:37:07.527 DEBUG:teuthology.orchestra.run.vm09:osd.4> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.4.service 2026-03-10T13:37:07.528 INFO:tasks.cephadm:Deploying osd.5 on vm09 with /dev/vdd... 2026-03-10T13:37:07.528 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- lvm zap /dev/vdd 2026-03-10T13:37:07.873 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:37:08.173 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 13:37:07 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4[57743]: 2026-03-10T13:37:07.866+0000 7f6b2a5d0740 -1 osd.4 0 log_to_monitors true 2026-03-10T13:37:08.527 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:08 vm09 ceph-mon[53367]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T13:37:08.527 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:08 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:08.527 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:08 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:08.527 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:08 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:08.527 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:08 vm09 ceph-mon[53367]: from='osd.4 v1:192.168.123.109:6800/3898346219' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T13:37:08.527 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:08 vm09 ceph-mon[53367]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T13:37:08.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:08 vm05 ceph-mon[51512]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T13:37:08.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:08.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:08.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:08 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:08.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:08 vm05 ceph-mon[51512]: from='osd.4 v1:192.168.123.109:6800/3898346219' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T13:37:08.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:08 vm05 ceph-mon[51512]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T13:37:08.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:08 vm05 ceph-mon[58955]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 
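For each additional device, the cephadm task repeats a two-step cycle, recorded above and below: first wipe any leftover LVM state on the device with "ceph-volume lvm zap", then hand the clean device to the orchestrator. A condensed sketch of the pair of commands as they appear in this log, with the image tag and fsid abbreviated for readability (the full values are in the surrounding lines):

    # Step 1: destroy any prior LVM/OSD metadata on the target device.
    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:<sha1> ceph-volume \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid <fsid> -- lvm zap /dev/vdd
    # Step 2: let the orchestrator create and deploy the OSD daemon.
    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:<sha1> shell \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid <fsid> -- ceph orch daemon add osd vm09:/dev/vdd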
2026-03-10T13:37:08.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:08 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:08.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:08 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:08.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:08 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:08.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:08 vm05 ceph-mon[58955]: from='osd.4 v1:192.168.123.109:6800/3898346219' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T13:37:08.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:08 vm05 ceph-mon[58955]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T13:37:09.327 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:37:09.342 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch daemon add osd vm09:/dev/vdd 2026-03-10T13:37:09.428 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:09 vm09 ceph-mon[53367]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T13:37:09.428 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:09 vm09 ceph-mon[53367]: osdmap e26: 5 total, 4 up, 5 in 2026-03-10T13:37:09.428 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:09 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:37:09.428 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:09 vm09 ceph-mon[53367]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:09.428 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:09 vm09 ceph-mon[53367]: from='osd.4 v1:192.168.123.109:6800/3898346219' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:09.428 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:09 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:09.428 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:09 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:09.428 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:09 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:09.428 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:09 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:09.428 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:09 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:09.428 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:09 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:09.507 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:37:09.644 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.643+0000 7f252b4a1640 1 -- 192.168.123.109:0/3278644269 >> v1:192.168.123.109:6789/0 conn(0x7f2524077040 legacy=0x7f25240754a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:09.645 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.645+0000 7f252b4a1640 1 -- 192.168.123.109:0/3278644269 shutdown_connections 2026-03-10T13:37:09.645 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.645+0000 7f252b4a1640 1 -- 192.168.123.109:0/3278644269 >> 192.168.123.109:0/3278644269 conn(0x7f25240fde70 msgr2=0x7f25241002d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:09.645 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.645+0000 7f252b4a1640 1 -- 192.168.123.109:0/3278644269 shutdown_connections 2026-03-10T13:37:09.645 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.645+0000 7f252b4a1640 1 -- 192.168.123.109:0/3278644269 wait complete. 2026-03-10T13:37:09.645 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.646+0000 7f252b4a1640 1 Processor -- start 2026-03-10T13:37:09.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.646+0000 7f252b4a1640 1 -- start start 2026-03-10T13:37:09.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.646+0000 7f252b4a1640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f2524102c00 con 0x7f2524077040 2026-03-10T13:37:09.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.646+0000 7f252b4a1640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f2524102dd0 con 0x7f252410cd80 2026-03-10T13:37:09.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.646+0000 7f252b4a1640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f2524102fa0 con 0x7f2524075bb0 2026-03-10T13:37:09.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.646+0000 7f2529216640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f2524075bb0 0x7f25241059b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.109:55156/0 (socket says 192.168.123.109:55156) 2026-03-10T13:37:09.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.646+0000 7f2529216640 1 -- 192.168.123.109:0/1946832143 learned_addr learned my addr 192.168.123.109:0/1946832143 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-10T13:37:09.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.646+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4038551374 0 0) 0x7f2524102c00 con 0x7f2524077040 2026-03-10T13:37:09.646 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.646+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f24fc003620 con 0x7f2524077040 2026-03-10T13:37:09.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.647+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) 
Success) ==== 33+0+0 (unknown 2284601973 0 0) 0x7f2524102dd0 con 0x7f252410cd80 2026-03-10T13:37:09.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.647+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f2524102c00 con 0x7f252410cd80 2026-03-10T13:37:09.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.647+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3226920057 0 0) 0x7f24fc003620 con 0x7f2524077040 2026-03-10T13:37:09.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.647+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f2524102dd0 con 0x7f2524077040 2026-03-10T13:37:09.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.647+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f2514003140 con 0x7f2524077040 2026-03-10T13:37:09.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.647+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 918462070 0 0) 0x7f2524102dd0 con 0x7f2524077040 2026-03-10T13:37:09.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.647+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 >> v1:192.168.123.105:6790/0 conn(0x7f2524075bb0 legacy=0x7f25241059b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:09.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.647+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 >> v1:192.168.123.109:6789/0 conn(0x7f252410cd80 legacy=0x7f25241aa150 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:09.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.647+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f25241ae8a0 con 0x7f2524077040 2026-03-10T13:37:09.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.647+0000 7f252b4a1640 1 -- 192.168.123.109:0/1946832143 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f25241ab870 con 0x7f2524077040 2026-03-10T13:37:09.647 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.647+0000 7f252b4a1640 1 -- 192.168.123.109:0/1946832143 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f25241abe20 con 0x7f2524077040 2026-03-10T13:37:09.648 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.648+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f2514003b60 con 0x7f2524077040 2026-03-10T13:37:09.648 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.648+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f2514004b50 con 0x7f2524077040 2026-03-10T13:37:09.648 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.648+0000 7f252b4a1640 1 -- 192.168.123.109:0/1946832143 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f24ec005180 con 0x7f2524077040 2026-03-10T13:37:09.649 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.649+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f2514003710 con 0x7f2524077040 2026-03-10T13:37:09.649 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.649+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(27..27 src has 1..27) ==== 3045+0+0 (unknown 612547620 0 0) 0x7f25140932c0 con 0x7f2524077040 2026-03-10T13:37:09.651 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.651+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f251405ced0 con 0x7f2524077040 2026-03-10T13:37:09.749 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:09.749+0000 7f252b4a1640 1 -- 192.168.123.109:0/1946832143 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}) -- 0x7f24ec002bf0 con 0x7f24fc0782a0 2026-03-10T13:37:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[58955]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T13:37:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[58955]: osdmap e26: 5 total, 4 up, 5 in 2026-03-10T13:37:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:37:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[58955]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[58955]: from='osd.4 v1:192.168.123.109:6800/3898346219' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:09.832 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[51512]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T13:37:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[51512]: osdmap e26: 5 total, 4 up, 5 in 2026-03-10T13:37:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:37:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[51512]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[51512]: from='osd.4 v1:192.168.123.109:6800/3898346219' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:09 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:10.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:10 vm09 ceph-mon[53367]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T13:37:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:10 vm09 ceph-mon[53367]: Detected new or changed devices on vm09 2026-03-10T13:37:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:10 vm09 ceph-mon[53367]: Adjusting osd_memory_target on vm09 to 257.0M 2026-03-10T13:37:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:10 vm09 ceph-mon[53367]: Unable to set osd_memory_target on vm09 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T13:37:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:10 vm09 ceph-mon[53367]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T13:37:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:10 vm09 ceph-mon[53367]: osdmap e27: 5 total, 4 up, 5 in 2026-03-10T13:37:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:10 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 
4}]: dispatch 2026-03-10T13:37:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:10 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:37:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:10 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:37:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:10 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:10 vm09 ceph-mon[53367]: from='osd.4 ' entity='osd.4' 2026-03-10T13:37:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:10 vm09 ceph-mon[53367]: osd.4 v1:192.168.123.109:6800/3898346219 boot 2026-03-10T13:37:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:10 vm09 ceph-mon[53367]: osdmap e28: 5 total, 5 up, 5 in 2026-03-10T13:37:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:10 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:37:10.674 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 13:37:10 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4[57743]: 2026-03-10T13:37:10.374+0000 7f6b26551640 -1 osd.4 0 waiting for initial osdmap 2026-03-10T13:37:10.674 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 13:37:10 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4[57743]: 2026-03-10T13:37:10.382+0000 7f6b21b7a640 -1 osd.4 27 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T13:37:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[58955]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T13:37:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[58955]: Detected new or changed devices on vm09 2026-03-10T13:37:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[58955]: Adjusting osd_memory_target on vm09 to 257.0M 2026-03-10T13:37:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[58955]: Unable to set osd_memory_target on vm09 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T13:37:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[58955]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T13:37:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[58955]: osdmap e27: 5 total, 4 up, 5 in 2026-03-10T13:37:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:37:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:37:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:37:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[58955]: from='osd.4 ' entity='osd.4' 2026-03-10T13:37:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[58955]: osd.4 v1:192.168.123.109:6800/3898346219 boot 2026-03-10T13:37:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[58955]: osdmap e28: 5 total, 5 up, 5 in 2026-03-10T13:37:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:37:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[51512]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T13:37:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[51512]: Detected new or changed devices on vm09 2026-03-10T13:37:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[51512]: Adjusting osd_memory_target on vm09 to 257.0M 2026-03-10T13:37:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[51512]: Unable to set osd_memory_target on vm09 to 269536460: error parsing value: Value '269536460' is below minimum 939524096 2026-03-10T13:37:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[51512]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T13:37:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[51512]: osdmap e27: 5 total, 4 up, 5 in 2026-03-10T13:37:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:37:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:37:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:37:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[51512]: from='osd.4 ' entity='osd.4' 2026-03-10T13:37:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[51512]: osd.4 v1:192.168.123.109:6800/3898346219 boot 2026-03-10T13:37:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[51512]: osdmap e28: 5 total, 5 up, 5 in 2026-03-10T13:37:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:10 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: 
dispatch 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[58955]: purged_snaps scrub starts 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[58955]: purged_snaps scrub ok 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[58955]: from='client.14349 v1:192.168.123.109:0/1946832143' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[58955]: osdmap e29: 5 total, 5 up, 5 in 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.109:0/2326713458' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dba319a5-a2e5-417f-b334-ac4bdbd6a2aa"}]: dispatch 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[58955]: from='client.24244 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dba319a5-a2e5-417f-b334-ac4bdbd6a2aa"}]: dispatch 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[58955]: from='client.24244 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dba319a5-a2e5-417f-b334-ac4bdbd6a2aa"}]': finished 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[58955]: osdmap e30: 6 total, 5 up, 6 in 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.109:0/1946069709' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[51512]: purged_snaps scrub starts 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[51512]: purged_snaps scrub ok 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[51512]: from='client.14349 v1:192.168.123.109:0/1946832143' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[51512]: osdmap e29: 5 total, 5 up, 5 in 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.109:0/2326713458' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dba319a5-a2e5-417f-b334-ac4bdbd6a2aa"}]: dispatch 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[51512]: from='client.24244 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dba319a5-a2e5-417f-b334-ac4bdbd6a2aa"}]: dispatch 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[51512]: from='client.24244 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dba319a5-a2e5-417f-b334-ac4bdbd6a2aa"}]': finished 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[51512]: osdmap e30: 6 total, 5 up, 6 in 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.109:0/1946069709' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:37:11.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:11 vm09 ceph-mon[53367]: purged_snaps scrub starts 2026-03-10T13:37:11.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:11 vm09 ceph-mon[53367]: purged_snaps scrub ok 2026-03-10T13:37:11.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:11 vm09 ceph-mon[53367]: from='client.14349 v1:192.168.123.109:0/1946832143' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:11.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:11 vm09 ceph-mon[53367]: osdmap e29: 5 total, 5 up, 5 in 2026-03-10T13:37:11.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.109:0/2326713458' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dba319a5-a2e5-417f-b334-ac4bdbd6a2aa"}]: dispatch 2026-03-10T13:37:11.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:11 vm09 ceph-mon[53367]: from='client.24244 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dba319a5-a2e5-417f-b334-ac4bdbd6a2aa"}]: dispatch 2026-03-10T13:37:11.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:11 vm09 ceph-mon[53367]: from='client.24244 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dba319a5-a2e5-417f-b334-ac4bdbd6a2aa"}]': finished 2026-03-10T13:37:11.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:11 vm09 ceph-mon[53367]: osdmap e30: 6 total, 5 up, 6 in 2026-03-10T13:37:11.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:11 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:11.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:11 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.109:0/1946069709' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:37:12.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:12 vm05 ceph-mon[58955]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 377 MiB used, 100 GiB / 100 GiB avail 2026-03-10T13:37:12.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:12 vm05 ceph-mon[58955]: osdmap e31: 6 total, 5 up, 6 in 2026-03-10T13:37:12.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:12 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:12.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:12 vm05 ceph-mon[51512]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 377 MiB used, 100 GiB / 100 GiB avail 2026-03-10T13:37:12.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:12 vm05 ceph-mon[51512]: osdmap e31: 6 total, 5 up, 6 in 2026-03-10T13:37:12.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:12 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:12.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:12 vm09 ceph-mon[53367]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 377 MiB used, 100 GiB / 100 GiB avail 2026-03-10T13:37:12.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:12 vm09 ceph-mon[53367]: osdmap e31: 6 total, 5 up, 6 in 2026-03-10T13:37:12.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:12 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:13.829 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:13 vm09 ceph-mon[53367]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 377 MiB used, 100 GiB / 100 GiB avail 2026-03-10T13:37:14.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:13 vm05 ceph-mon[58955]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 377 MiB used, 100 GiB / 100 GiB avail 2026-03-10T13:37:14.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:13 vm05 ceph-mon[51512]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 377 MiB used, 100 GiB / 100 GiB avail 2026-03-10T13:37:15.865 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:15 vm09 ceph-mon[53367]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 377 MiB used, 100 GiB / 100 GiB avail; 109 KiB/s, 0 objects/s recovering 2026-03-10T13:37:15.866 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:15 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T13:37:15.866 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:15 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:15.866 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:15 vm09 ceph-mon[53367]: Deploying daemon osd.5 on vm09 2026-03-10T13:37:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:15 vm05 ceph-mon[58955]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 377 MiB used, 100 GiB / 100 GiB avail; 109 KiB/s, 0 objects/s recovering 2026-03-10T13:37:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:15 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T13:37:16.081 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:15 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:15 vm05 ceph-mon[58955]: Deploying daemon osd.5 on vm09 2026-03-10T13:37:16.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:15 vm05 ceph-mon[51512]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 377 MiB used, 100 GiB / 100 GiB avail; 109 KiB/s, 0 objects/s recovering 2026-03-10T13:37:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:15 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T13:37:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:15 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:15 vm05 ceph-mon[51512]: Deploying daemon osd.5 on vm09 2026-03-10T13:37:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:17 vm09 ceph-mon[53367]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-10T13:37:17.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:17 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:17.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:17 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:17.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:17 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:17.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:17 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:17.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:17 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:17.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:17 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:17.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:17 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:17.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:17 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[58955]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-10T13:37:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:17 vm05 
ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:18.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[51512]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-10T13:37:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:17 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:18.206 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 5 on host 'vm09' 2026-03-10T13:37:18.206 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:18.206+0000 7f25127fc640 1 -- 192.168.123.109:0/1946832143 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 1967485741) 0x7f24ec002bf0 con 0x7f24fc0782a0 2026-03-10T13:37:18.208 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:18.208+0000 7f252b4a1640 1 -- 192.168.123.109:0/1946832143 >> v1:192.168.123.105:6800/3845654103 conn(0x7f24fc0782a0 legacy=0x7f24fc07a760 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:18.208 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:18.208+0000 7f252b4a1640 1 -- 
192.168.123.109:0/1946832143 >> v1:192.168.123.105:6789/0 conn(0x7f2524077040 legacy=0x7f25241024f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:18.208 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:18.208+0000 7f252b4a1640 1 -- 192.168.123.109:0/1946832143 shutdown_connections 2026-03-10T13:37:18.208 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:18.208+0000 7f252b4a1640 1 -- 192.168.123.109:0/1946832143 >> 192.168.123.109:0/1946832143 conn(0x7f25240fde70 msgr2=0x7f25240ff900 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:18.209 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:18.209+0000 7f252b4a1640 1 -- 192.168.123.109:0/1946832143 shutdown_connections 2026-03-10T13:37:18.209 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:18.209+0000 7f252b4a1640 1 -- 192.168.123.109:0/1946832143 wait complete. 2026-03-10T13:37:18.363 DEBUG:teuthology.orchestra.run.vm09:osd.5> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.5.service 2026-03-10T13:37:18.364 INFO:tasks.cephadm:Deploying osd.6 on vm09 with /dev/vdc... 2026-03-10T13:37:18.365 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- lvm zap /dev/vdc 2026-03-10T13:37:18.603 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:18 vm09 ceph-mon[53367]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T13:37:18.603 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:18 vm09 ceph-mon[53367]: from='osd.5 v1:192.168.123.109:6804/452558008' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T13:37:18.603 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:18 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:18.603 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:18 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:18.603 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:18 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:18.657 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:37:19.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:18 vm05 ceph-mon[58955]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T13:37:19.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:18 vm05 ceph-mon[58955]: from='osd.5 v1:192.168.123.109:6804/452558008' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T13:37:19.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:18 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:19.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:18 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:19.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:18 vm05 ceph-mon[58955]: 
from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:19.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:18 vm05 ceph-mon[51512]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T13:37:19.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:18 vm05 ceph-mon[51512]: from='osd.5 v1:192.168.123.109:6804/452558008' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T13:37:19.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:18 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:19.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:18 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:19.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:18 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T13:37:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T13:37:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: osdmap e32: 6 total, 5 up, 6 in 2026-03-10T13:37:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: from='osd.5 v1:192.168.123.109:6804/452558008' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: Detected new or changed devices on vm09 2026-03-10T13:37:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: Adjusting osd_memory_target on vm09 to 128.5M 2026-03-10T13:37:19.673 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: Unable to set osd_memory_target on vm09 to 134768230: error parsing value: Value '134768230' is below minimum 939524096 2026-03-10T13:37:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:19 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[58955]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[58955]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[58955]: osdmap e32: 6 total, 5 up, 6 in 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[58955]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[58955]: from='osd.5 v1:192.168.123.109:6804/452558008' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[58955]: Detected new or changed devices on vm09 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[58955]: Adjusting osd_memory_target on vm09 to 128.5M 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[58955]: Unable to set osd_memory_target on vm09 to 134768230: error parsing value: Value '134768230' is below minimum 939524096 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 
ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: osdmap e32: 6 total, 5 up, 6 in 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:20.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: from='osd.5 v1:192.168.123.109:6804/452558008' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: Detected new or changed devices on vm09 2026-03-10T13:37:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: Adjusting osd_memory_target on vm09 to 128.5M 2026-03-10T13:37:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: Unable to set osd_memory_target on vm09 to 134768230: error parsing value: Value '134768230' is below minimum 939524096 2026-03-10T13:37:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: from='mgr.14150 
v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:19 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:20.146 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:37:20.163 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch daemon add osd vm09:/dev/vdc 2026-03-10T13:37:20.332 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:37:20.466 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.466+0000 7f28df749640 1 -- 192.168.123.109:0/2087867596 >> v1:192.168.123.109:6789/0 conn(0x7f28d8108dc0 legacy=0x7f28d810b210 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:20.467 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.467+0000 7f28df749640 1 -- 192.168.123.109:0/2087867596 shutdown_connections 2026-03-10T13:37:20.467 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.467+0000 7f28df749640 1 -- 192.168.123.109:0/2087867596 >> 192.168.123.109:0/2087867596 conn(0x7f28d8100120 msgr2=0x7f28d8102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:20.467 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.467+0000 7f28df749640 1 -- 192.168.123.109:0/2087867596 shutdown_connections 2026-03-10T13:37:20.467 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.467+0000 7f28df749640 1 -- 192.168.123.109:0/2087867596 wait complete. 
2026-03-10T13:37:20.467 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.467+0000 7f28df749640 1 Processor -- start 2026-03-10T13:37:20.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.468+0000 7f28df749640 1 -- start start 2026-03-10T13:37:20.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.468+0000 7f28df749640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f28d8074be0 con 0x7f28d8104990 2026-03-10T13:37:20.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.468+0000 7f28df749640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f28d8074db0 con 0x7f28d8108dc0 2026-03-10T13:37:20.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.468+0000 7f28df749640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f28d8074f80 con 0x7f28d810cad0 2026-03-10T13:37:20.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.468+0000 7f28ddcbf640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f28d810cad0 0x7f28d80734c0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.109:40848/0 (socket says 192.168.123.109:40848) 2026-03-10T13:37:20.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.468+0000 7f28ddcbf640 1 -- 192.168.123.109:0/2238789367 learned_addr learned my addr 192.168.123.109:0/2238789367 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-10T13:37:20.468 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.469+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4574497 0 0) 0x7f28d8074f80 con 0x7f28d810cad0 2026-03-10T13:37:20.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.469+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f28b4003620 con 0x7f28d810cad0 2026-03-10T13:37:20.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.469+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1339876032 0 0) 0x7f28d8074be0 con 0x7f28d8104990 2026-03-10T13:37:20.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.469+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f28d8074f80 con 0x7f28d8104990 2026-03-10T13:37:20.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.469+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4218275773 0 0) 0x7f28b4003620 con 0x7f28d810cad0 2026-03-10T13:37:20.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.469+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f28d8074be0 con 0x7f28d810cad0 2026-03-10T13:37:20.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.469+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f28d4003520 con 0x7f28d810cad0 2026-03-10T13:37:20.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.469+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 <== mon.1 
v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1265673602 0 0) 0x7f28d8074db0 con 0x7f28d8108dc0 2026-03-10T13:37:20.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.469+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f28b4003620 con 0x7f28d8108dc0 2026-03-10T13:37:20.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.469+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2588863459 0 0) 0x7f28d8074f80 con 0x7f28d8104990 2026-03-10T13:37:20.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.469+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f28d8074db0 con 0x7f28d8104990 2026-03-10T13:37:20.469 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.469+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f28c0002fe0 con 0x7f28d8104990 2026-03-10T13:37:20.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.470+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 22444111 0 0) 0x7f28d8074db0 con 0x7f28d8104990 2026-03-10T13:37:20.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.470+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 >> v1:192.168.123.105:6790/0 conn(0x7f28d810cad0 legacy=0x7f28d80734c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:20.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.470+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 >> v1:192.168.123.109:6789/0 conn(0x7f28d8108dc0 legacy=0x7f28d8077900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:20.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.470+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f28d8078010 con 0x7f28d8104990 2026-03-10T13:37:20.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.470+0000 7f28df749640 1 -- 192.168.123.109:0/2238789367 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f28d80782a0 con 0x7f28d8104990 2026-03-10T13:37:20.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.470+0000 7f28df749640 1 -- 192.168.123.109:0/2238789367 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f28d81ac2b0 con 0x7f28d8104990 2026-03-10T13:37:20.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.470+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f28c0004540 con 0x7f28d8104990 2026-03-10T13:37:20.470 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.470+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f28c00049f0 con 0x7f28d8104990 2026-03-10T13:37:20.472 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.472+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f28c0003180 con 
0x7f28d8104990 2026-03-10T13:37:20.472 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.472+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(33..33 src has 1..33) ==== 3337+0+0 (unknown 684758010 0 0) 0x7f28c0093080 con 0x7f28d8104990 2026-03-10T13:37:20.472 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.471+0000 7f28df749640 1 -- 192.168.123.109:0/2238789367 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f28d81124f0 con 0x7f28d8104990 2026-03-10T13:37:20.476 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.476+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f28c005cb70 con 0x7f28d8104990 2026-03-10T13:37:20.608 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:20.608+0000 7f28df749640 1 -- 192.168.123.109:0/2238789367 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}) -- 0x7f28d810a920 con 0x7f28b4078100 2026-03-10T13:37:20.801 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:20 vm09 ceph-mon[53367]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T13:37:20.801 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:20 vm09 ceph-mon[53367]: osdmap e33: 6 total, 5 up, 6 in 2026-03-10T13:37:20.801 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:20 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:20.801 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:20 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:20.801 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:20 vm09 ceph-mon[53367]: osdmap e34: 6 total, 5 up, 6 in 2026-03-10T13:37:20.801 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:20 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:20.801 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:20 vm09 ceph-mon[53367]: from='osd.5 ' entity='osd.5' 2026-03-10T13:37:20.802 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:20 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:37:20.802 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:20 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:37:20.802 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:20 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:20.802 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 13:37:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5[62939]: 2026-03-10T13:37:20.585+0000 7f5b32a1c640 -1 osd.5 0 waiting for initial osdmap 2026-03-10T13:37:20.802 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 13:37:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5[62939]: 2026-03-10T13:37:20.596+0000 7f5b2d832640 -1 osd.5 34 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T13:37:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[58955]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T13:37:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[58955]: osdmap e33: 6 total, 5 up, 6 in 2026-03-10T13:37:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[58955]: osdmap e34: 6 total, 5 up, 6 in 2026-03-10T13:37:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[58955]: from='osd.5 ' entity='osd.5' 2026-03-10T13:37:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:37:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:37:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:21.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[51512]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T13:37:21.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[51512]: osdmap e33: 6 total, 5 up, 6 in 2026-03-10T13:37:21.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:21.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:21.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[51512]: osdmap e34: 6 total, 5 up, 6 in 2026-03-10T13:37:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[51512]: from='osd.5 ' entity='osd.5' 2026-03-10T13:37:21.082 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:37:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:37:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:20 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:21.700 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:21 vm09 ceph-mon[53367]: purged_snaps scrub starts 2026-03-10T13:37:21.700 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:21 vm09 ceph-mon[53367]: purged_snaps scrub ok 2026-03-10T13:37:21.700 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:21 vm09 ceph-mon[53367]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T13:37:21.700 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:21 vm09 ceph-mon[53367]: from='client.14373 v1:192.168.123.109:0/2238789367' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:21.700 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:21 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:21.701 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.109:0/61982352' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afe42148-806c-4ff6-9729-634661c10d48"}]: dispatch 2026-03-10T13:37:21.701 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:21 vm09 ceph-mon[53367]: from='client.24278 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afe42148-806c-4ff6-9729-634661c10d48"}]: dispatch 2026-03-10T13:37:21.701 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:21 vm09 ceph-mon[53367]: from='client.24278 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "afe42148-806c-4ff6-9729-634661c10d48"}]': finished 2026-03-10T13:37:21.701 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:21 vm09 ceph-mon[53367]: osd.5 v1:192.168.123.109:6804/452558008 boot 2026-03-10T13:37:21.701 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:21 vm09 ceph-mon[53367]: osdmap e35: 7 total, 6 up, 7 in 2026-03-10T13:37:21.701 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:21 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:21.701 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:21 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[58955]: purged_snaps scrub starts 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[58955]: purged_snaps scrub ok 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[58955]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 
objects/s recovering 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[58955]: from='client.14373 v1:192.168.123.109:0/2238789367' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.109:0/61982352' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afe42148-806c-4ff6-9729-634661c10d48"}]: dispatch 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[58955]: from='client.24278 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afe42148-806c-4ff6-9729-634661c10d48"}]: dispatch 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[58955]: from='client.24278 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "afe42148-806c-4ff6-9729-634661c10d48"}]': finished 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[58955]: osd.5 v1:192.168.123.109:6804/452558008 boot 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[58955]: osdmap e35: 7 total, 6 up, 7 in 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[51512]: purged_snaps scrub starts 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[51512]: purged_snaps scrub ok 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[51512]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T13:37:22.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[51512]: from='client.14373 v1:192.168.123.109:0/2238789367' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.109:0/61982352' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afe42148-806c-4ff6-9729-634661c10d48"}]: dispatch 2026-03-10T13:37:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[51512]: from='client.24278 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afe42148-806c-4ff6-9729-634661c10d48"}]: dispatch 2026-03-10T13:37:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[51512]: from='client.24278 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "afe42148-806c-4ff6-9729-634661c10d48"}]': finished 2026-03-10T13:37:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[51512]: osd.5 v1:192.168.123.109:6804/452558008 boot 2026-03-10T13:37:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[51512]: osdmap e35: 7 total, 6 up, 7 in 2026-03-10T13:37:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:37:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:21 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:22.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.109:0/3299388409' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:37:22.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:22 vm09 ceph-mon[53367]: osdmap e36: 7 total, 6 up, 7 in 2026-03-10T13:37:22.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:22 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:23.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.109:0/3299388409' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:37:23.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:22 vm05 ceph-mon[58955]: osdmap e36: 7 total, 6 up, 7 in 2026-03-10T13:37:23.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:22 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:23.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:22 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.109:0/3299388409' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:37:23.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:22 vm05 ceph-mon[51512]: osdmap e36: 7 total, 6 up, 7 in 2026-03-10T13:37:23.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:22 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:24 vm09 ceph-mon[53367]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:24 vm09 ceph-mon[53367]: osdmap e37: 7 total, 6 up, 7 in 2026-03-10T13:37:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:24 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:24 vm05 ceph-mon[58955]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:24 vm05 ceph-mon[58955]: osdmap e37: 7 total, 6 up, 7 in 2026-03-10T13:37:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:24 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:24 vm05 ceph-mon[51512]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:24 vm05 ceph-mon[51512]: osdmap e37: 7 total, 6 up, 7 in 2026-03-10T13:37:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:24 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:26.514 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:26 vm09 ceph-mon[53367]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:26.514 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:26 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T13:37:26.514 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:26 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:26.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:26 vm05 ceph-mon[58955]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:26.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:26 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T13:37:26.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:26 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:26.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:26 vm05 ceph-mon[51512]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:26.831 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:26 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T13:37:26.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:26 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:27.648 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:27 vm09 ceph-mon[53367]: Deploying daemon osd.6 on vm09 2026-03-10T13:37:27.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:27 vm05 ceph-mon[58955]: Deploying daemon osd.6 on vm09 2026-03-10T13:37:27.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:27 vm05 ceph-mon[51512]: Deploying daemon osd.6 on vm09 2026-03-10T13:37:28.470 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:28 vm09 ceph-mon[53367]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:28.471 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:28 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:28.471 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:28 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:28.471 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:28 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:28.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:28 vm05 ceph-mon[51512]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:28.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:28 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:28.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:28 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:28.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:28 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:28.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:28 vm05 ceph-mon[58955]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:28.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:28 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:28.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:28 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:28.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:28 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:29.172 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:29.172+0000 7f28ce7fc640 1 -- 192.168.123.109:0/2238789367 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 2506627020) 0x7f28d810a920 con 0x7f28b4078100 2026-03-10T13:37:29.175 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 6 on host 'vm09' 2026-03-10T13:37:29.175 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:29.174+0000 7f28df749640 1 -- 
192.168.123.109:0/2238789367 >> v1:192.168.123.105:6800/3845654103 conn(0x7f28b4078100 legacy=0x7f28b407a5c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:29.175 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:29.174+0000 7f28df749640 1 -- 192.168.123.109:0/2238789367 >> v1:192.168.123.105:6789/0 conn(0x7f28d8104990 legacy=0x7f28d807adc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:29.175 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:29.175+0000 7f28df749640 1 -- 192.168.123.109:0/2238789367 shutdown_connections 2026-03-10T13:37:29.175 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:29.175+0000 7f28df749640 1 -- 192.168.123.109:0/2238789367 >> 192.168.123.109:0/2238789367 conn(0x7f28d8100120 msgr2=0x7f28d8109200 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:29.175 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:29.175+0000 7f28df749640 1 -- 192.168.123.109:0/2238789367 shutdown_connections 2026-03-10T13:37:29.175 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:29.176+0000 7f28df749640 1 -- 192.168.123.109:0/2238789367 wait complete. 2026-03-10T13:37:29.322 DEBUG:teuthology.orchestra.run.vm09:osd.6> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.6.service 2026-03-10T13:37:29.324 INFO:tasks.cephadm:Deploying osd.7 on vm09 with /dev/vdb... 2026-03-10T13:37:29.324 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- lvm zap /dev/vdb 2026-03-10T13:37:29.647 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:37:29.891 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:29 vm09 ceph-mon[53367]: pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:29.891 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:29 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:29.891 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:29 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:29.891 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:29 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:29.891 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:29 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:29.891 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:29 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:29.891 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:29 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:29.891 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:29 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:29.891 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:29 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:29.892 
INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 13:37:29 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T13:37:29.632+0000 7f4624beb740 -1 osd.6 0 log_to_monitors true 2026-03-10T13:37:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[51512]: pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:30.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[58955]: pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:30.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:30.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:30.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:30.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:30.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:30.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:30.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:30.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:29 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' 
entity='mgr.y' 2026-03-10T13:37:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:30 vm09 ceph-mon[53367]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T13:37:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:30 vm09 ceph-mon[53367]: from='osd.6 v1:192.168.123.109:6808/354656606' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T13:37:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:30 vm09 ceph-mon[53367]: Detected new or changed devices on vm09 2026-03-10T13:37:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:30 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:30 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:30 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:30 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:30 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:30 vm09 ceph-mon[53367]: Adjusting osd_memory_target on vm09 to 87739k 2026-03-10T13:37:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:30 vm09 ceph-mon[53367]: Unable to set osd_memory_target on vm09 to 89845486: error parsing value: Value '89845486' is below minimum 939524096 2026-03-10T13:37:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:30 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:30 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:30 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:31.065 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[58955]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[58955]: from='osd.6 v1:192.168.123.109:6808/354656606' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[58955]: Detected new or changed devices on vm09 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:31.081 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[58955]: Adjusting osd_memory_target on vm09 to 87739k 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[58955]: Unable to set osd_memory_target on vm09 to 89845486: error parsing value: Value '89845486' is below minimum 939524096 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[51512]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[51512]: from='osd.6 v1:192.168.123.109:6808/354656606' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[51512]: Detected new or changed devices on vm09 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 
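The `Adjusting osd_memory_target on vm09 to 87739k` / `Unable to set osd_memory_target ... below minimum 939524096` pairs around this point are cephadm's memory autotuner hitting Ceph's floor: the per-OSD share of the small VPS's RAM (87739 KiB, about 86 MiB) is far below the option's enforced minimum of 939524096 bytes (896 MiB), so the mon rejects the `config set` and the harmless warning repeats on every tune pass. A minimal sketch of that clamp, assuming the autotuner simply divides the tunable host memory by the OSD count; this illustrates the check and is not cephadm's actual code:

    # Values taken from this run's log lines.
    OSD_MEMORY_TARGET_MIN = 939_524_096   # 896 MiB, the option's enforced minimum
    per_osd_target = 89_845_486           # autotuned share computed for vm09 (~86 MiB)

    def try_set_osd_memory_target(value: int) -> bool:
        # The mon validates the option's bounds before storing it; below the
        # floor the command fails with "error parsing value", as logged above.
        if value < OSD_MEMORY_TARGET_MIN:
            print(f"Unable to set osd_memory_target to {value}: "
                  f"Value '{value}' is below minimum {OSD_MEMORY_TARGET_MIN}")
            return False
        return True

    try_set_osd_memory_target(per_osd_target)   # -> False on these small VPS nodes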
2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[51512]: Adjusting osd_memory_target on vm09 to 87739k 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[51512]: Unable to set osd_memory_target on vm09 to 89845486: error parsing value: Value '89845486' is below minimum 939524096 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:30 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:31.084 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch daemon add osd vm09:/dev/vdb 2026-03-10T13:37:31.246 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:37:31.376 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.375+0000 7f4d851d0640 1 -- 192.168.123.109:0/3546220926 <== mon.1 v1:192.168.123.109:6789/0 5 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f4d6c004730 con 0x7f4d80108dc0 2026-03-10T13:37:31.376 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.376+0000 7f4d8845d640 1 -- 192.168.123.109:0/3546220926 >> v1:192.168.123.109:6789/0 conn(0x7f4d80108dc0 legacy=0x7f4d8010b210 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:31.376 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.376+0000 7f4d8845d640 1 -- 192.168.123.109:0/3546220926 shutdown_connections 2026-03-10T13:37:31.376 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.376+0000 7f4d8845d640 1 -- 192.168.123.109:0/3546220926 >> 192.168.123.109:0/3546220926 conn(0x7f4d80100120 msgr2=0x7f4d80102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:31.376 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.376+0000 7f4d8845d640 1 -- 192.168.123.109:0/3546220926 shutdown_connections 2026-03-10T13:37:31.376 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.376+0000 7f4d8845d640 1 -- 192.168.123.109:0/3546220926 wait complete. 
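The per-device deploy step is exactly two cephadm invocations, both visible verbatim in the DEBUG lines above: `ceph-volume lvm zap` wipes the target disk, then `ceph orch daemon add osd <host>:<device>` hands it to the orchestrator. A minimal sketch of driving the same pair from Python, using the image and fsid this run logs; `run()` is a hypothetical local helper, not teuthology's remote-execution API:

    import subprocess

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "e063dc72-1c85-11f1-a098-09993c5c5b66"
    CONF = ["-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring"]

    def run(args):
        # Hypothetical helper: run one command and raise on failure, roughly
        # what teuthology.orchestra.run does over SSH to the test node.
        subprocess.run(args, check=True)

    def deploy_osd(host: str, device: str) -> None:
        # 1) Wipe any previous LVM / partition state on the device.
        run(["sudo", "cephadm", "--image", IMAGE, "ceph-volume",
             *CONF, "--fsid", FSID, "--", "lvm", "zap", device])
        # 2) Ask the orchestrator (via the active mgr) to create an OSD on it.
        run(["sudo", "cephadm", "--image", IMAGE, "shell",
             *CONF, "--fsid", FSID, "--",
             "ceph", "orch", "daemon", "add", "osd", f"{host}:{device}"])

    deploy_osd("vm09", "/dev/vdb")   # the osd.7 deployment shown above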
2026-03-10T13:37:31.376 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.377+0000 7f4d8845d640 1 Processor -- start 2026-03-10T13:37:31.377 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.377+0000 7f4d8845d640 1 -- start start 2026-03-10T13:37:31.377 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.377+0000 7f4d8845d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f4d8019c810 con 0x7f4d8010cad0 2026-03-10T13:37:31.377 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.377+0000 7f4d8845d640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f4d801a7fd0 con 0x7f4d80104990 2026-03-10T13:37:31.377 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.377+0000 7f4d8845d640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f4d801a91b0 con 0x7f4d80108dc0 2026-03-10T13:37:31.377 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.377+0000 7f4d861d2640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f4d80104990 0x7f4d8019bc90 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.109:53682/0 (socket says 192.168.123.109:53682) 2026-03-10T13:37:31.377 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.377+0000 7f4d861d2640 1 -- 192.168.123.109:0/577402345 learned_addr learned my addr 192.168.123.109:0/577402345 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-10T13:37:31.377 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.377+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3985605976 0 0) 0x7f4d801a91b0 con 0x7f4d80108dc0 2026-03-10T13:37:31.377 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.377+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4d54003620 con 0x7f4d80108dc0 2026-03-10T13:37:31.377 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.377+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 60982165 0 0) 0x7f4d8019c810 con 0x7f4d8010cad0 2026-03-10T13:37:31.377 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.377+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4d801a91b0 con 0x7f4d8010cad0 2026-03-10T13:37:31.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.378+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2454246632 0 0) 0x7f4d801a7fd0 con 0x7f4d80104990 2026-03-10T13:37:31.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.378+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4d8019c810 con 0x7f4d80104990 2026-03-10T13:37:31.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.378+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 692564754 0 0) 0x7f4d54003620 con 0x7f4d80108dc0 2026-03-10T13:37:31.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.378+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 --> 
v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f4d801a7fd0 con 0x7f4d80108dc0 2026-03-10T13:37:31.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.378+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 967035264 0 0) 0x7f4d801a91b0 con 0x7f4d8010cad0 2026-03-10T13:37:31.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.378+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f4d54003620 con 0x7f4d8010cad0 2026-03-10T13:37:31.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.378+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f4d6c004610 con 0x7f4d80108dc0 2026-03-10T13:37:31.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.378+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f4d7c003400 con 0x7f4d8010cad0 2026-03-10T13:37:31.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.378+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2841478731 0 0) 0x7f4d801a7fd0 con 0x7f4d80108dc0 2026-03-10T13:37:31.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.378+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 >> v1:192.168.123.109:6789/0 conn(0x7f4d80104990 legacy=0x7f4d8019bc90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:31.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.378+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 >> v1:192.168.123.105:6789/0 conn(0x7f4d8010cad0 legacy=0x7f4d801a58a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:31.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.378+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4d801aa390 con 0x7f4d80108dc0 2026-03-10T13:37:31.378 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.378+0000 7f4d8845d640 1 -- 192.168.123.109:0/577402345 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f4d801a7020 con 0x7f4d80108dc0 2026-03-10T13:37:31.379 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.378+0000 7f4d8845d640 1 -- 192.168.123.109:0/577402345 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f4d801a75b0 con 0x7f4d80108dc0 2026-03-10T13:37:31.379 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.379+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f4d6c0028f0 con 0x7f4d80108dc0 2026-03-10T13:37:31.379 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.379+0000 7f4d8845d640 1 -- 192.168.123.109:0/577402345 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4d50005180 con 0x7f4d80108dc0 2026-03-10T13:37:31.379 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.379+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f4d6c004bd0 con 0x7f4d80108dc0 
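These stderr lines are the ceph CLI's mon handshake, repeated for every `cephadm shell -- ceph ...` call in this log: the client authenticates against all three mons in parallel, keeps one session (mon.2 here), marks the other connections down, subscribes to the monmap/mgrmap/osdmap, and fetches `get_command_descriptions` before dispatching the actual command. The same sequence is what librados performs inside `connect()`; a small sketch with the python-rados binding, assuming an admin keyring at the usual path:

    import json
    import rados  # the python3-rados binding shipped with Ceph

    # connect() performs the mon authentication / map-subscription handshake
    # that the stderr lines above show the CLI doing step by step.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf=dict(keyring="/etc/ceph/ceph.client.admin.keyring"))
    cluster.connect()

    # mon_command() sends a JSON command to the quorum, analogous to the
    # get_command_descriptions / mon_command traffic in this log.
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "osd stat", "format": "json"}), b"")
    print(ret, json.loads(outbuf))

    cluster.shutdown()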
2026-03-10T13:37:31.381 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.380+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f4d6c003dc0 con 0x7f4d80108dc0 2026-03-10T13:37:31.381 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.381+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(38..38 src has 1..38) ==== 3613+0+0 (unknown 3493889280 0 0) 0x7f4d6c093470 con 0x7f4d80108dc0 2026-03-10T13:37:31.382 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.382+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f4d6c05ce50 con 0x7f4d80108dc0 2026-03-10T13:37:31.475 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:31.475+0000 7f4d8845d640 1 -- 192.168.123.109:0/577402345 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}) -- 0x7f4d50002bf0 con 0x7f4d540781c0 2026-03-10T13:37:31.985 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:31 vm09 ceph-mon[53367]: pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:31.985 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:31 vm09 ceph-mon[53367]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T13:37:31.985 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:31 vm09 ceph-mon[53367]: osdmap e38: 7 total, 6 up, 7 in 2026-03-10T13:37:31.985 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:31 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:31.985 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:31 vm09 ceph-mon[53367]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:31.985 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:31 vm09 ceph-mon[53367]: from='osd.6 v1:192.168.123.109:6808/354656606' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:31.985 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:31 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:37:31.985 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:31 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:37:31.986 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:31 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[58955]: pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[58955]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd 
crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[58955]: osdmap e38: 7 total, 6 up, 7 in 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[58955]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[58955]: from='osd.6 v1:192.168.123.109:6808/354656606' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[51512]: pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[51512]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[51512]: osdmap e38: 7 total, 6 up, 7 in 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[51512]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[51512]: from='osd.6 v1:192.168.123.109:6808/354656606' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:37:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:37:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:31 vm05 ceph-mon[51512]: from='mgr.14150 
v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:32.775 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 13:37:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T13:37:32.694+0000 7f4620b6c640 -1 osd.6 0 waiting for initial osdmap 2026-03-10T13:37:32.775 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 13:37:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T13:37:32.706+0000 7f461c195640 -1 osd.6 40 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T13:37:32.775 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:32 vm09 ceph-mon[53367]: purged_snaps scrub starts 2026-03-10T13:37:32.775 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:32 vm09 ceph-mon[53367]: purged_snaps scrub ok 2026-03-10T13:37:32.775 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:32 vm09 ceph-mon[53367]: from='client.24293 v1:192.168.123.109:0/577402345' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:32.775 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:32 vm09 ceph-mon[53367]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T13:37:32.775 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:32 vm09 ceph-mon[53367]: osdmap e39: 7 total, 6 up, 7 in 2026-03-10T13:37:32.775 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:32 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:32.775 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:32 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:32.775 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:32 vm09 ceph-mon[53367]: from='client.24298 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "902bce05-1aee-4630-a57d-74b141285652"}]: dispatch 2026-03-10T13:37:32.775 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:32 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.109:0/1161184014' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "902bce05-1aee-4630-a57d-74b141285652"}]: dispatch 2026-03-10T13:37:32.775 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:32 vm09 ceph-mon[53367]: from='client.24298 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "902bce05-1aee-4630-a57d-74b141285652"}]': finished 2026-03-10T13:37:32.775 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:32 vm09 ceph-mon[53367]: osdmap e40: 8 total, 6 up, 8 in 2026-03-10T13:37:32.775 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:32 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:32.775 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:32 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:32.775 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:32 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:32.775 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:32 vm09 ceph-mon[53367]: from='osd.6 ' entity='osd.6' 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[58955]: purged_snaps scrub starts 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[58955]: purged_snaps scrub ok 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[58955]: from='client.24293 v1:192.168.123.109:0/577402345' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[58955]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[58955]: osdmap e39: 7 total, 6 up, 7 in 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[58955]: from='client.24298 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "902bce05-1aee-4630-a57d-74b141285652"}]: dispatch 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.109:0/1161184014' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "902bce05-1aee-4630-a57d-74b141285652"}]: dispatch 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[58955]: from='client.24298 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "902bce05-1aee-4630-a57d-74b141285652"}]': finished 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[58955]: osdmap e40: 8 total, 6 up, 8 in 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[58955]: from='osd.6 ' entity='osd.6' 2026-03-10T13:37:33.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[51512]: purged_snaps scrub starts 2026-03-10T13:37:33.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[51512]: purged_snaps scrub ok 2026-03-10T13:37:33.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[51512]: from='client.24293 v1:192.168.123.109:0/577402345' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:33.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[51512]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T13:37:33.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[51512]: osdmap e39: 7 total, 6 up, 7 in 2026-03-10T13:37:33.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:33.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:33.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[51512]: from='client.24298 ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "902bce05-1aee-4630-a57d-74b141285652"}]: dispatch 2026-03-10T13:37:33.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.109:0/1161184014' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "902bce05-1aee-4630-a57d-74b141285652"}]: dispatch 2026-03-10T13:37:33.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[51512]: from='client.24298 ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "902bce05-1aee-4630-a57d-74b141285652"}]': finished 2026-03-10T13:37:33.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[51512]: osdmap e40: 8 total, 6 up, 8 in 2026-03-10T13:37:33.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:33.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:33.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:33.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:32 vm05 ceph-mon[51512]: from='osd.6 ' entity='osd.6' 2026-03-10T13:37:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:33 vm05 ceph-mon[58955]: pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.109:0/1182081469' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:37:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:33 vm05 ceph-mon[58955]: osd.6 v1:192.168.123.109:6808/354656606 boot 2026-03-10T13:37:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:33 vm05 ceph-mon[58955]: osdmap e41: 8 total, 7 up, 8 in 2026-03-10T13:37:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:33 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:33 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:33 vm05 ceph-mon[51512]: pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:33 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.109:0/1182081469' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:37:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:33 vm05 ceph-mon[51512]: osd.6 v1:192.168.123.109:6808/354656606 boot 2026-03-10T13:37:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:33 vm05 ceph-mon[51512]: osdmap e41: 8 total, 7 up, 8 in 2026-03-10T13:37:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:33 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:33 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:33 vm09 ceph-mon[53367]: pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-10T13:37:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.109:0/1182081469' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:37:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:33 vm09 ceph-mon[53367]: osd.6 v1:192.168.123.109:6808/354656606 boot 2026-03-10T13:37:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:33 vm09 ceph-mon[53367]: osdmap e41: 8 total, 7 up, 8 in 2026-03-10T13:37:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:33 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:37:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:33 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:35.667 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:35 vm09 ceph-mon[53367]: osdmap e42: 8 total, 7 up, 8 in 2026-03-10T13:37:35.668 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:35 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:35.668 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:35 vm09 ceph-mon[53367]: pgmap v86: 1 pgs: 1 remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T13:37:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:35 vm05 ceph-mon[58955]: osdmap e42: 8 total, 7 up, 8 in 2026-03-10T13:37:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:35 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:35 vm05 ceph-mon[58955]: pgmap v86: 1 pgs: 1 remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T13:37:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:35 vm05 ceph-mon[51512]: osdmap e42: 8 total, 7 up, 8 in 2026-03-10T13:37:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:35 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:35 vm05 ceph-mon[51512]: pgmap v86: 1 pgs: 1 remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T13:37:36.591 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:36 vm09 ceph-mon[53367]: osdmap e43: 8 total, 7 up, 8 in 2026-03-10T13:37:36.591 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:36 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:36 vm05 ceph-mon[58955]: osdmap e43: 8 total, 7 up, 8 in 2026-03-10T13:37:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:36 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:36.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:36 vm05 ceph-mon[51512]: osdmap e43: 8 total, 7 up, 8 in 2026-03-10T13:37:36.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:36 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:37.657 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:37 vm09 ceph-mon[53367]: pgmap v88: 1 pgs: 1 remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T13:37:37.657 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:37 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T13:37:37.657 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:37 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:37.657 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:37 vm09 ceph-mon[53367]: Deploying daemon osd.7 on vm09 2026-03-10T13:37:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:37 vm05 ceph-mon[58955]: pgmap v88: 1 pgs: 1 remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T13:37:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:37 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T13:37:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:37 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:37 vm05 ceph-mon[58955]: Deploying daemon osd.7 on vm09 2026-03-10T13:37:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:37 vm05 ceph-mon[51512]: pgmap v88: 1 pgs: 1 remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T13:37:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:37 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T13:37:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:37 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:37 vm05 ceph-mon[51512]: Deploying daemon osd.7 on vm09 2026-03-10T13:37:39.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:39 vm09 ceph-mon[53367]: pgmap v89: 1 pgs: 1 remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T13:37:39.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 
10 13:37:39 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:39.609 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:39 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:39.889 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:39 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:39 vm05 ceph-mon[51512]: pgmap v89: 1 pgs: 1 remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T13:37:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:39 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:39 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:39 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:39 vm05 ceph-mon[58955]: pgmap v89: 1 pgs: 1 remapped; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T13:37:40.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:39 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:40.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:39 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:39 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.352 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 7 on host 'vm09' 2026-03-10T13:37:40.352 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:40.351+0000 7f4d6b7fe640 1 -- 192.168.123.109:0/577402345 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (unknown 0 0 3398224787) 0x7f4d50002bf0 con 0x7f4d540781c0 2026-03-10T13:37:40.354 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:40.354+0000 7f4d8845d640 1 -- 192.168.123.109:0/577402345 >> v1:192.168.123.105:6800/3845654103 conn(0x7f4d540781c0 legacy=0x7f4d5407a680 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:40.354 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:40.354+0000 7f4d8845d640 1 -- 192.168.123.109:0/577402345 >> v1:192.168.123.105:6790/0 conn(0x7f4d80108dc0 legacy=0x7f4d801a2170 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:40.354 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:40.355+0000 7f4d8845d640 1 -- 192.168.123.109:0/577402345 shutdown_connections 2026-03-10T13:37:40.354 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:40.355+0000 7f4d8845d640 1 -- 192.168.123.109:0/577402345 >> 192.168.123.109:0/577402345 conn(0x7f4d80100120 msgr2=0x7f4d8010b210 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:40.354 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:40.355+0000 7f4d8845d640 1 -- 192.168.123.109:0/577402345 shutdown_connections 2026-03-10T13:37:40.355 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:40.355+0000 7f4d8845d640 1 -- 192.168.123.109:0/577402345 wait complete. 2026-03-10T13:37:40.514 DEBUG:teuthology.orchestra.run.vm09:osd.7> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.7.service 2026-03-10T13:37:40.516 INFO:tasks.cephadm:Waiting for 8 OSDs to come up... 2026-03-10T13:37:40.516 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd stat -f json 2026-03-10T13:37:40.694 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:40.787 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:40 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.787 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:40 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.787 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:40 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:40.787 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:40 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:40.787 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:40 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.787 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:40 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:40.787 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:40 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.787 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:40 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:40 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:40 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:40 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:40 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:40 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:40 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:40 vm05 
ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:40 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:40 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:40 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:40 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:40 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:40 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:40 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:40 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.812 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:40 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:40.837 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.835+0000 7f589b0ad640 1 -- 192.168.123.105:0/979991456 >> v1:192.168.123.105:6790/0 conn(0x7f5894108de0 legacy=0x7f589410b230 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:40.837 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.836+0000 7f589b0ad640 1 -- 192.168.123.105:0/979991456 shutdown_connections 2026-03-10T13:37:40.837 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.836+0000 7f589b0ad640 1 -- 192.168.123.105:0/979991456 >> 192.168.123.105:0/979991456 conn(0x7f5894100120 msgr2=0x7f5894102580 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:40.837 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.836+0000 7f589b0ad640 1 -- 192.168.123.105:0/979991456 shutdown_connections 2026-03-10T13:37:40.837 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.836+0000 7f589b0ad640 1 -- 192.168.123.105:0/979991456 wait complete. 
2026-03-10T13:37:40.837 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.837+0000 7f589b0ad640 1 Processor -- start 2026-03-10T13:37:40.837 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.837+0000 7f589b0ad640 1 -- start start 2026-03-10T13:37:40.838 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.837+0000 7f589b0ad640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f589419c610 con 0x7f589410caf0 2026-03-10T13:37:40.838 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.837+0000 7f589b0ad640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f58941a7dd0 con 0x7f58941049b0 2026-03-10T13:37:40.838 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.837+0000 7f589b0ad640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f58941a8fb0 con 0x7f5894108de0 2026-03-10T13:37:40.838 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.838+0000 7f5899623640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f589410caf0 0x7f58941a56a0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:34272/0 (socket says 192.168.123.105:34272) 2026-03-10T13:37:40.838 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.838+0000 7f5899623640 1 -- 192.168.123.105:0/1804336755 learned_addr learned my addr 192.168.123.105:0/1804336755 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:40.838 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.838+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2956193538 0 0) 0x7f589419c610 con 0x7f589410caf0 2026-03-10T13:37:40.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.838+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5868003620 con 0x7f589410caf0 2026-03-10T13:37:40.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.838+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 262176172 0 0) 0x7f58941a7dd0 con 0x7f58941049b0 2026-03-10T13:37:40.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.838+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f589419c610 con 0x7f58941049b0 2026-03-10T13:37:40.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.838+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2817785450 0 0) 0x7f58941a8fb0 con 0x7f5894108de0 2026-03-10T13:37:40.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.838+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f58941a7dd0 con 0x7f5894108de0 2026-03-10T13:37:40.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.839+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3061135333 0 0) 0x7f58941a7dd0 con 0x7f5894108de0 2026-03-10T13:37:40.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.839+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 --> 
v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f58941a8fb0 con 0x7f5894108de0 2026-03-10T13:37:40.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.839+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3622041035 0 0) 0x7f589419c610 con 0x7f58941049b0 2026-03-10T13:37:40.839 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.839+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f58941a7dd0 con 0x7f58941049b0 2026-03-10T13:37:40.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.839+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f5884002d90 con 0x7f5894108de0 2026-03-10T13:37:40.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.839+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f5880002fb0 con 0x7f58941049b0 2026-03-10T13:37:40.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.839+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 4046960308 0 0) 0x7f58941a8fb0 con 0x7f5894108de0 2026-03-10T13:37:40.840 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.839+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 >> v1:192.168.123.109:6789/0 conn(0x7f58941049b0 legacy=0x7f589419ba90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:40.841 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.839+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 >> v1:192.168.123.105:6789/0 conn(0x7f589410caf0 legacy=0x7f58941a56a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:40.841 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.839+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f58941aa190 con 0x7f5894108de0 2026-03-10T13:37:40.841 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.839+0000 7f589b0ad640 1 -- 192.168.123.105:0/1804336755 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f58941a8000 con 0x7f5894108de0 2026-03-10T13:37:40.841 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.839+0000 7f589b0ad640 1 -- 192.168.123.105:0/1804336755 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f58941a85e0 con 0x7f5894108de0 2026-03-10T13:37:40.841 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.840+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f5884003a10 con 0x7f5894108de0 2026-03-10T13:37:40.844 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.840+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f5884004ec0 con 0x7f5894108de0 2026-03-10T13:37:40.844 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.841+0000 7f589b0ad640 1 -- 192.168.123.105:0/1804336755 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f585c005180 con 0x7f5894108de0 
2026-03-10T13:37:40.844 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.842+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f588401d7d0 con 0x7f5894108de0
2026-03-10T13:37:40.844 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.842+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(43..43 src has 1..43) ==== 3884+0+0 (unknown 612892776 0 0) 0x7f58840943a0 con 0x7f5894108de0
2026-03-10T13:37:40.845 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.844+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f588405de10 con 0x7f5894108de0
2026-03-10T13:37:40.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.940+0000 7f589b0ad640 1 -- 192.168.123.105:0/1804336755 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd stat", "format": "json"} v 0) -- 0x7f585c005470 con 0x7f5894108de0
2026-03-10T13:37:40.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.942+0000 7f5891ffb640 1 -- 192.168.123.105:0/1804336755 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v43) ==== 74+0+130 (unknown 3384411206 0 3446764199) 0x7f5884061ac0 con 0x7f5894108de0
2026-03-10T13:37:40.942 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:37:40.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.944+0000 7f589b0ad640 1 -- 192.168.123.105:0/1804336755 >> v1:192.168.123.105:6800/3845654103 conn(0x7f5868078230 legacy=0x7f586807a6f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:37:40.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.944+0000 7f589b0ad640 1 -- 192.168.123.105:0/1804336755 >> v1:192.168.123.105:6790/0 conn(0x7f5894108de0 legacy=0x7f58941a1f70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:37:40.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.945+0000 7f589b0ad640 1 -- 192.168.123.105:0/1804336755 shutdown_connections
2026-03-10T13:37:40.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.945+0000 7f589b0ad640 1 -- 192.168.123.105:0/1804336755 >> 192.168.123.105:0/1804336755 conn(0x7f5894100120 msgr2=0x7f589410b960 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:37:40.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.945+0000 7f589b0ad640 1 -- 192.168.123.105:0/1804336755 shutdown_connections
2026-03-10T13:37:40.945 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:40.945+0000 7f589b0ad640 1 -- 192.168.123.105:0/1804336755 wait complete.
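The stderr trace above is the full round trip one `cephadm shell -- ceph ...` invocation makes with messenger debugging at level 1: authenticate to the mons, subscribe to monmap/config/mgrmap/osdmap, fetch get_command_descriptions, send the actual mon_command, then tear the messenger down. A minimal sketch for pairing each mon_command with its mon_command_ack when reading such a trace; the regexes are fitted to the exact line shapes above and are an assumption, not a stable Ceph log format:

    import re

    # Line shapes copied from the trace above, e.g.
    #   ... --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd stat", "format": "json"} v 0) ...
    #   ... <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v43) ...
    CMD = re.compile(r'--> .* -- mon_command\((\{.*?\}) v \d+\)')
    ACK = re.compile(r'<== .* mon_command_ack\(\[(\{.*?\})\]=(-?\d+)')

    def pair_mon_commands(lines):
        """Yield (command_json, return_code) pairs in send order."""
        pending = []
        for line in lines:
            m = CMD.search(line)
            if m:
                pending.append(m.group(1))
                continue
            m = ACK.search(line)
            if m and pending:
                yield pending.pop(0), int(m.group(2))

Run over the invocation above, this yields get_command_descriptions followed by osd stat, both acknowledged with return code 0.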
2026-03-10T13:37:41.115 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":43,"num_osds":8,"num_up_osds":7,"osd_up_since":1773149853,"num_in_osds":8,"osd_in_since":1773149852,"num_remapped_pgs":0} 2026-03-10T13:37:41.480 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 13:37:41 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T13:37:41.226+0000 7fe137f53740 -1 osd.7 0 log_to_monitors true 2026-03-10T13:37:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[58955]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T13:37:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1804336755' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:37:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[58955]: from='osd.7 v1:192.168.123.109:6812/3977889858' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T13:37:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:42.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:42.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:42.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[51512]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T13:37:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1804336755' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:37:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[51512]: from='osd.7 v1:192.168.123.109:6812/3977889858' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T13:37:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:41 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:42.116 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd stat -f json 2026-03-10T13:37:42.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:41 vm09 ceph-mon[53367]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T13:37:42.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:41 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1804336755' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:37:42.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:41 vm09 ceph-mon[53367]: from='osd.7 v1:192.168.123.109:6812/3977889858' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T13:37:42.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:41 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:42.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:41 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:42.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:41 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:41 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:41 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:41 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:37:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:41 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:41 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:41 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:42.295 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:42.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.441+0000 7fefe7fa8640 1 -- 192.168.123.105:0/3152619710 >> v1:192.168.123.109:6789/0 conn(0x7fefe0102700 legacy=0x7fefe0102b00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:42.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.442+0000 7fefe7fa8640 1 -- 192.168.123.105:0/3152619710 shutdown_connections 2026-03-10T13:37:42.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.442+0000 7fefe7fa8640 1 -- 192.168.123.105:0/3152619710 >> 192.168.123.105:0/3152619710 conn(0x7fefe00fde70 msgr2=0x7fefe01002d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:42.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.442+0000 7fefe7fa8640 1 -- 192.168.123.105:0/3152619710 shutdown_connections 2026-03-10T13:37:42.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.442+0000 7fefe7fa8640 1 -- 192.168.123.105:0/3152619710 wait complete. 
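The "Waiting for 8 OSDs to come up..." step from tasks.cephadm works by rerunning `ceph osd stat -f json` through cephadm shell until the counters match, which is why the same command line and its {"epoch":...,"num_up_osds":...} JSON keep repeating here. A minimal sketch of such a polling loop, reusing the image and fsid from the command lines above; wait_for_osds_up is a hypothetical helper, not teuthology's actual implementation:

    import json
    import subprocess
    import time

    # Values copied from the cephadm shell invocations above.
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "e063dc72-1c85-11f1-a098-09993c5c5b66"

    def wait_for_osds_up(want, timeout=600, interval=2):
        """Poll `ceph osd stat -f json` until `want` OSDs report up."""
        deadline = time.time() + timeout
        while True:
            out = subprocess.check_output([
                "sudo", "cephadm", "--image", IMAGE, "shell",
                "-c", "/etc/ceph/ceph.conf",
                "-k", "/etc/ceph/ceph.client.admin.keyring",
                "--fsid", FSID, "--",
                "ceph", "osd", "stat", "-f", "json",
            ])
            # e.g. {"epoch":43,"num_osds":8,"num_up_osds":7,...} as logged above
            stat = json.loads(out)
            if stat["num_up_osds"] >= want:
                return stat
            if time.time() > deadline:
                raise TimeoutError(f"only {stat['num_up_osds']}/{want} OSDs up")
            time.sleep(interval)

On this run such a loop would return once osd.7 boots and the osdmap reaches e46 with 8 total, 8 up, 8 in, as the records below show.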
2026-03-10T13:37:42.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.443+0000 7fefe7fa8640 1 Processor -- start 2026-03-10T13:37:42.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.443+0000 7fefe7fa8640 1 -- start start 2026-03-10T13:37:42.444 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.443+0000 7fefe7fa8640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fefe010dd80 con 0x7fefe0106b30 2026-03-10T13:37:42.444 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.443+0000 7fefe7fa8640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fefe01a38d0 con 0x7fefe010a840 2026-03-10T13:37:42.444 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.443+0000 7fefe7fa8640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fefe01a4ab0 con 0x7fefe0102700 2026-03-10T13:37:42.445 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.443+0000 7fefe5d1d640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7fefe0102700 0x7fefe010d240 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:33604/0 (socket says 192.168.123.105:33604) 2026-03-10T13:37:42.446 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.443+0000 7fefe5d1d640 1 -- 192.168.123.105:0/476207875 learned_addr learned my addr 192.168.123.105:0/476207875 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:42.447 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.443+0000 7fefceffd640 1 -- 192.168.123.105:0/476207875 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 404238047 0 0) 0x7fefe01a4ab0 con 0x7fefe0102700 2026-03-10T13:37:42.447 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.444+0000 7fefceffd640 1 -- 192.168.123.105:0/476207875 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fefb0003620 con 0x7fefe0102700 2026-03-10T13:37:42.447 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.444+0000 7fefceffd640 1 -- 192.168.123.105:0/476207875 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 724832022 0 0) 0x7fefb0003620 con 0x7fefe0102700 2026-03-10T13:37:42.447 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.444+0000 7fefceffd640 1 -- 192.168.123.105:0/476207875 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fefe01a4ab0 con 0x7fefe0102700 2026-03-10T13:37:42.447 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.444+0000 7fefceffd640 1 -- 192.168.123.105:0/476207875 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fefd40034e0 con 0x7fefe0102700 2026-03-10T13:37:42.447 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.444+0000 7fefceffd640 1 -- 192.168.123.105:0/476207875 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 4091611160 0 0) 0x7fefe01a4ab0 con 0x7fefe0102700 2026-03-10T13:37:42.447 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.444+0000 7fefceffd640 1 -- 192.168.123.105:0/476207875 >> v1:192.168.123.109:6789/0 conn(0x7fefe010a840 legacy=0x7fefe01a11a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:42.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.444+0000 7fefceffd640 1 -- 192.168.123.105:0/476207875 >> 
v1:192.168.123.105:6789/0 conn(0x7fefe0106b30 legacy=0x7fefe019da70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:42.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.444+0000 7fefceffd640 1 -- 192.168.123.105:0/476207875 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fefe01a5c90 con 0x7fefe0102700 2026-03-10T13:37:42.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.444+0000 7fefe7fa8640 1 -- 192.168.123.105:0/476207875 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fefe01a3b00 con 0x7fefe0102700 2026-03-10T13:37:42.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.444+0000 7fefe7fa8640 1 -- 192.168.123.105:0/476207875 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7fefe01a40b0 con 0x7fefe0102700 2026-03-10T13:37:42.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.444+0000 7fefceffd640 1 -- 192.168.123.105:0/476207875 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fefd4004490 con 0x7fefe0102700 2026-03-10T13:37:42.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.444+0000 7fefceffd640 1 -- 192.168.123.105:0/476207875 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fefd40050a0 con 0x7fefe0102700 2026-03-10T13:37:42.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.445+0000 7fefceffd640 1 -- 192.168.123.105:0/476207875 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7fefd40052e0 con 0x7fefe0102700 2026-03-10T13:37:42.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.446+0000 7fefceffd640 1 -- 192.168.123.105:0/476207875 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(44..44 src has 1..44) ==== 3905+0+0 (unknown 3682880759 0 0) 0x7fefd4095960 con 0x7fefe0102700 2026-03-10T13:37:42.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.446+0000 7fefe7fa8640 1 -- 192.168.123.105:0/476207875 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fefa8005180 con 0x7fefe0102700 2026-03-10T13:37:42.450 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.450+0000 7fefceffd640 1 -- 192.168.123.105:0/476207875 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fefd405f210 con 0x7fefe0102700 2026-03-10T13:37:42.550 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.548+0000 7fefe7fa8640 1 -- 192.168.123.105:0/476207875 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd stat", "format": "json"} v 0) -- 0x7fefa8005470 con 0x7fefe0102700 2026-03-10T13:37:42.550 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.549+0000 7fefceffd640 1 -- 192.168.123.105:0/476207875 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v44) ==== 74+0+130 (unknown 987155921 0 4006695257) 0x7fefd4062ec0 con 0x7fefe0102700 2026-03-10T13:37:42.550 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:37:42.552 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.552+0000 7fefe7fa8640 1 -- 192.168.123.105:0/476207875 >> v1:192.168.123.105:6800/3845654103 conn(0x7fefb0089480 legacy=0x7fefb008b940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:42.552 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.552+0000 7fefe7fa8640 1 -- 192.168.123.105:0/476207875 >> v1:192.168.123.105:6790/0 conn(0x7fefe0102700 legacy=0x7fefe010d240 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:42.552 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.552+0000 7fefe7fa8640 1 -- 192.168.123.105:0/476207875 shutdown_connections 2026-03-10T13:37:42.553 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.552+0000 7fefe7fa8640 1 -- 192.168.123.105:0/476207875 >> 192.168.123.105:0/476207875 conn(0x7fefe00fde70 msgr2=0x7fefe01098d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:42.553 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.552+0000 7fefe7fa8640 1 -- 192.168.123.105:0/476207875 shutdown_connections 2026-03-10T13:37:42.553 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:42.552+0000 7fefe7fa8640 1 -- 192.168.123.105:0/476207875 wait complete. 2026-03-10T13:37:42.726 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":44,"num_osds":8,"num_up_osds":7,"osd_up_since":1773149853,"num_in_osds":8,"osd_in_since":1773149852,"num_remapped_pgs":0} 2026-03-10T13:37:43.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[58955]: Detected new or changed devices on vm09 2026-03-10T13:37:43.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[58955]: Adjusting osd_memory_target on vm09 to 65804k 2026-03-10T13:37:43.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[58955]: Unable to set osd_memory_target on vm09 to 67384115: error parsing value: Value '67384115' is below minimum 939524096 2026-03-10T13:37:43.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[58955]: from='osd.7 v1:192.168.123.109:6812/3977889858' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T13:37:43.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[58955]: osdmap e44: 8 total, 7 up, 8 in 2026-03-10T13:37:43.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[58955]: from='osd.7 v1:192.168.123.109:6812/3977889858' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:43.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:43.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/476207875' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:37:43.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[51512]: Detected new or changed devices on vm09 2026-03-10T13:37:43.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[51512]: Adjusting osd_memory_target on vm09 to 65804k 2026-03-10T13:37:43.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[51512]: Unable to set osd_memory_target on vm09 to 67384115: error parsing value: Value '67384115' is below minimum 939524096 2026-03-10T13:37:43.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[51512]: from='osd.7 v1:192.168.123.109:6812/3977889858' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T13:37:43.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[51512]: osdmap e44: 8 total, 7 up, 8 in 2026-03-10T13:37:43.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[51512]: from='osd.7 v1:192.168.123.109:6812/3977889858' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:43.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:43.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/476207875' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:37:43.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:42 vm09 ceph-mon[53367]: Detected new or changed devices on vm09 2026-03-10T13:37:43.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:42 vm09 ceph-mon[53367]: Adjusting osd_memory_target on vm09 to 65804k 2026-03-10T13:37:43.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:42 vm09 ceph-mon[53367]: Unable to set osd_memory_target on vm09 to 67384115: error parsing value: Value '67384115' is below minimum 939524096 2026-03-10T13:37:43.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:42 vm09 ceph-mon[53367]: from='osd.7 v1:192.168.123.109:6812/3977889858' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T13:37:43.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:42 vm09 ceph-mon[53367]: osdmap e44: 8 total, 7 up, 8 in 2026-03-10T13:37:43.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:42 vm09 ceph-mon[53367]: from='osd.7 v1:192.168.123.109:6812/3977889858' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T13:37:43.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:42 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:43.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:42 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/476207875' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:37:43.173 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 13:37:42 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T13:37:42.807+0000 7fe1346e7640 -1 osd.7 0 waiting for initial osdmap 2026-03-10T13:37:43.173 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 13:37:42 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T13:37:42.816+0000 7fe12fcfe640 -1 osd.7 45 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T13:37:43.727 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd stat -f json 2026-03-10T13:37:43.903 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:44.019 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:43 vm05 ceph-mon[58955]: pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T13:37:44.019 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:43 vm05 ceph-mon[58955]: from='osd.7 v1:192.168.123.109:6812/3977889858' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T13:37:44.019 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:43 vm05 ceph-mon[58955]: osdmap e45: 8 total, 7 up, 8 in 2026-03-10T13:37:44.019 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:43 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:44.019 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:43 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:44.020 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:43 vm05 ceph-mon[58955]: osd.7 v1:192.168.123.109:6812/3977889858 boot 2026-03-10T13:37:44.020 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:43 vm05 ceph-mon[58955]: osdmap e46: 8 total, 8 up, 8 in 2026-03-10T13:37:44.020 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:43 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:44.020 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:43 vm05 ceph-mon[51512]: pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T13:37:44.020 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:43 vm05 ceph-mon[51512]: from='osd.7 v1:192.168.123.109:6812/3977889858' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T13:37:44.020 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:43 vm05 ceph-mon[51512]: osdmap e45: 8 total, 7 up, 8 in 2026-03-10T13:37:44.020 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:43 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:44.020 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 
13:37:43 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:44.020 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:43 vm05 ceph-mon[51512]: osd.7 v1:192.168.123.109:6812/3977889858 boot 2026-03-10T13:37:44.020 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:43 vm05 ceph-mon[51512]: osdmap e46: 8 total, 8 up, 8 in 2026-03-10T13:37:44.020 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:43 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:44.025 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.024+0000 7fec29972640 1 -- 192.168.123.105:0/1802937865 >> v1:192.168.123.105:6789/0 conn(0x7fec24108dc0 legacy=0x7fec2410b210 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:44.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.025+0000 7fec29972640 1 -- 192.168.123.105:0/1802937865 shutdown_connections 2026-03-10T13:37:44.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.025+0000 7fec29972640 1 -- 192.168.123.105:0/1802937865 >> 192.168.123.105:0/1802937865 conn(0x7fec24100120 msgr2=0x7fec24102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:44.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.025+0000 7fec29972640 1 -- 192.168.123.105:0/1802937865 shutdown_connections 2026-03-10T13:37:44.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.025+0000 7fec29972640 1 -- 192.168.123.105:0/1802937865 wait complete. 2026-03-10T13:37:44.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.026+0000 7fec29972640 1 Processor -- start 2026-03-10T13:37:44.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.026+0000 7fec29972640 1 -- start start 2026-03-10T13:37:44.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.026+0000 7fec29972640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fec2419c830 con 0x7fec24104990 2026-03-10T13:37:44.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.026+0000 7fec29972640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fec241a7ff0 con 0x7fec24108dc0 2026-03-10T13:37:44.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.026+0000 7fec29972640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fec241a91d0 con 0x7fec2410cad0 2026-03-10T13:37:44.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.026+0000 7fec237fe640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7fec2410cad0 0x7fec241a58c0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:33618/0 (socket says 192.168.123.105:33618) 2026-03-10T13:37:44.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.026+0000 7fec237fe640 1 -- 192.168.123.105:0/1723303680 learned_addr learned my addr 192.168.123.105:0/1723303680 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:44.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.027+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3577243530 0 0) 0x7fec241a7ff0 con 0x7fec24108dc0 2026-03-10T13:37:44.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.027+0000 
7fec28970640 1 -- 192.168.123.105:0/1723303680 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7febf8003620 con 0x7fec24108dc0 2026-03-10T13:37:44.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.027+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2669702401 0 0) 0x7fec241a91d0 con 0x7fec2410cad0 2026-03-10T13:37:44.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.027+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fec241a7ff0 con 0x7fec2410cad0 2026-03-10T13:37:44.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.027+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1207024119 0 0) 0x7fec241a7ff0 con 0x7fec2410cad0 2026-03-10T13:37:44.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.027+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fec241a91d0 con 0x7fec2410cad0 2026-03-10T13:37:44.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.027+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2059178622 0 0) 0x7febf8003620 con 0x7fec24108dc0 2026-03-10T13:37:44.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.027+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fec241a7ff0 con 0x7fec24108dc0 2026-03-10T13:37:44.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.027+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fec14003440 con 0x7fec2410cad0 2026-03-10T13:37:44.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.027+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fec0c0044e0 con 0x7fec24108dc0 2026-03-10T13:37:44.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.027+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3378556652 0 0) 0x7fec241a91d0 con 0x7fec2410cad0 2026-03-10T13:37:44.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.027+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 >> v1:192.168.123.109:6789/0 conn(0x7fec24108dc0 legacy=0x7fec241a2190 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:44.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.027+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 >> v1:192.168.123.105:6789/0 conn(0x7fec24104990 legacy=0x7fec2419bcb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:44.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.028+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fec241aa3b0 con 0x7fec2410cad0 2026-03-10T13:37:44.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.028+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 <== mon.2 
v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fec14003ee0 con 0x7fec2410cad0 2026-03-10T13:37:44.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.028+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fec14005f20 con 0x7fec2410cad0 2026-03-10T13:37:44.029 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.028+0000 7fec29972640 1 -- 192.168.123.105:0/1723303680 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fec241a9400 con 0x7fec2410cad0 2026-03-10T13:37:44.030 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.029+0000 7fec29972640 1 -- 192.168.123.105:0/1723303680 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7fec241a9990 con 0x7fec2410cad0 2026-03-10T13:37:44.030 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.030+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7fec14003a30 con 0x7fec2410cad0 2026-03-10T13:37:44.032 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.030+0000 7fec29972640 1 -- 192.168.123.105:0/1723303680 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fec241109c0 con 0x7fec2410cad0 2026-03-10T13:37:44.033 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.030+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(46..46 src has 1..46) ==== 4094+0+0 (unknown 1877003870 0 0) 0x7fec14093960 con 0x7fec2410cad0 2026-03-10T13:37:44.033 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.032+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fec1405e190 con 0x7fec2410cad0 2026-03-10T13:37:44.127 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.126+0000 7fec29972640 1 -- 192.168.123.105:0/1723303680 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd stat", "format": "json"} v 0) -- 0x7fec241a9ea0 con 0x7fec2410cad0 2026-03-10T13:37:44.127 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.127+0000 7fec28970640 1 -- 192.168.123.105:0/1723303680 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v46) ==== 74+0+130 (unknown 3270801518 0 2926808767) 0x7fec14061e40 con 0x7fec2410cad0 2026-03-10T13:37:44.127 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:37:44.129 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.129+0000 7fec29972640 1 -- 192.168.123.105:0/1723303680 >> v1:192.168.123.105:6800/3845654103 conn(0x7febf8078010 legacy=0x7febf807a4d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:44.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.129+0000 7fec29972640 1 -- 192.168.123.105:0/1723303680 >> v1:192.168.123.105:6790/0 conn(0x7fec2410cad0 legacy=0x7fec241a58c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:44.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.129+0000 7fec29972640 1 -- 192.168.123.105:0/1723303680 shutdown_connections 2026-03-10T13:37:44.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.129+0000 7fec29972640 1 -- 
192.168.123.105:0/1723303680 >> 192.168.123.105:0/1723303680 conn(0x7fec24100120 msgr2=0x7fec241091e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:44.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.130+0000 7fec29972640 1 -- 192.168.123.105:0/1723303680 shutdown_connections 2026-03-10T13:37:44.130 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.130+0000 7fec29972640 1 -- 192.168.123.105:0/1723303680 wait complete. 2026-03-10T13:37:44.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:43 vm09 ceph-mon[53367]: pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-10T13:37:44.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:43 vm09 ceph-mon[53367]: from='osd.7 v1:192.168.123.109:6812/3977889858' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T13:37:44.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:43 vm09 ceph-mon[53367]: osdmap e45: 8 total, 7 up, 8 in 2026-03-10T13:37:44.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:43 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:44.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:43 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:44.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:43 vm09 ceph-mon[53367]: osd.7 v1:192.168.123.109:6812/3977889858 boot 2026-03-10T13:37:44.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:43 vm09 ceph-mon[53367]: osdmap e46: 8 total, 8 up, 8 in 2026-03-10T13:37:44.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:43 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:37:44.296 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":46,"num_osds":8,"num_up_osds":8,"osd_up_since":1773149863,"num_in_osds":8,"osd_in_since":1773149852,"num_remapped_pgs":1} 2026-03-10T13:37:44.297 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd dump --format=json 2026-03-10T13:37:44.468 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:44.594 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.593+0000 7f15f7d02640 1 -- 192.168.123.105:0/4048391463 >> v1:192.168.123.105:6790/0 conn(0x7f15f010a910 legacy=0x7f15f010acf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:44.594 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.594+0000 7f15f7d02640 1 -- 192.168.123.105:0/4048391463 shutdown_connections 2026-03-10T13:37:44.594 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.594+0000 7f15f7d02640 1 -- 192.168.123.105:0/4048391463 >> 192.168.123.105:0/4048391463 conn(0x7f15f01005f0 msgr2=0x7f15f0102a10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:44.594 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.594+0000 7f15f7d02640 1 -- 192.168.123.105:0/4048391463 shutdown_connections 2026-03-10T13:37:44.594 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.594+0000 
7f15f7d02640 1 -- 192.168.123.105:0/4048391463 wait complete. 2026-03-10T13:37:44.594 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.594+0000 7f15f7d02640 1 Processor -- start 2026-03-10T13:37:44.595 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.594+0000 7f15f7d02640 1 -- start start 2026-03-10T13:37:44.595 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.594+0000 7f15f7d02640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f15f01ab720 con 0x7f15f0111360 2026-03-10T13:37:44.595 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.594+0000 7f15f7d02640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f15f01ac920 con 0x7f15f010a910 2026-03-10T13:37:44.595 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.594+0000 7f15f7d02640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f15f01adb20 con 0x7f15f010d7c0 2026-03-10T13:37:44.595 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.595+0000 7f15f5a77640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f15f010a910 0x7f15f01109a0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:59402/0 (socket says 192.168.123.105:59402) 2026-03-10T13:37:44.595 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.595+0000 7f15f5a77640 1 -- 192.168.123.105:0/1464695060 learned_addr learned my addr 192.168.123.105:0/1464695060 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:44.595 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.595+0000 7f15e6ffd640 1 -- 192.168.123.105:0/1464695060 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3121283682 0 0) 0x7f15f01ac920 con 0x7f15f010a910 2026-03-10T13:37:44.595 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.595+0000 7f15e6ffd640 1 -- 192.168.123.105:0/1464695060 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f15bc003620 con 0x7f15f010a910 2026-03-10T13:37:44.595 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.595+0000 7f15e6ffd640 1 -- 192.168.123.105:0/1464695060 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 57035801 0 0) 0x7f15bc003620 con 0x7f15f010a910 2026-03-10T13:37:44.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.595+0000 7f15e6ffd640 1 -- 192.168.123.105:0/1464695060 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f15f01ac920 con 0x7f15f010a910 2026-03-10T13:37:44.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.595+0000 7f15e6ffd640 1 -- 192.168.123.105:0/1464695060 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f15d80030e0 con 0x7f15f010a910 2026-03-10T13:37:44.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.595+0000 7f15e6ffd640 1 -- 192.168.123.105:0/1464695060 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3237868534 0 0) 0x7f15f01ac920 con 0x7f15f010a910 2026-03-10T13:37:44.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.595+0000 7f15e6ffd640 1 -- 192.168.123.105:0/1464695060 >> v1:192.168.123.105:6790/0 conn(0x7f15f010d7c0 legacy=0x7f15f01a6650 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:44.596 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.596+0000 7f15e6ffd640 1 -- 192.168.123.105:0/1464695060 >> v1:192.168.123.105:6789/0 conn(0x7f15f0111360 legacy=0x7f15f01a9e20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:44.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.596+0000 7f15e6ffd640 1 -- 192.168.123.105:0/1464695060 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f15f01aed20 con 0x7f15f010a910 2026-03-10T13:37:44.596 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.596+0000 7f15f7d02640 1 -- 192.168.123.105:0/1464695060 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f15f01ab950 con 0x7f15f010a910 2026-03-10T13:37:44.597 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.596+0000 7f15f7d02640 1 -- 192.168.123.105:0/1464695060 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f15f01abf00 con 0x7f15f010a910 2026-03-10T13:37:44.597 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.596+0000 7f15e6ffd640 1 -- 192.168.123.105:0/1464695060 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f15d8003490 con 0x7f15f010a910 2026-03-10T13:37:44.597 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.596+0000 7f15e6ffd640 1 -- 192.168.123.105:0/1464695060 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f15d8004e70 con 0x7f15f010a910 2026-03-10T13:37:44.597 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.597+0000 7f15e6ffd640 1 -- 192.168.123.105:0/1464695060 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f15d8005030 con 0x7f15f010a910 2026-03-10T13:37:44.598 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.598+0000 7f15f7d02640 1 -- 192.168.123.105:0/1464695060 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f15c0005180 con 0x7f15f010a910 2026-03-10T13:37:44.598 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.598+0000 7f15e6ffd640 1 -- 192.168.123.105:0/1464695060 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(46..46 src has 1..46) ==== 4094+0+0 (unknown 1877003870 0 0) 0x7f15d8094a10 con 0x7f15f010a910 2026-03-10T13:37:44.602 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.601+0000 7f15e6ffd640 1 -- 192.168.123.105:0/1464695060 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f15d8066e70 con 0x7f15f010a910 2026-03-10T13:37:44.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.694+0000 7f15f7d02640 1 -- 192.168.123.105:0/1464695060 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f15c0005470 con 0x7f15f010a910 2026-03-10T13:37:44.695 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.695+0000 7f15e6ffd640 1 -- 192.168.123.105:0/1464695060 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v46) ==== 74+0+11703 (unknown 2981296451 0 25189338) 0x7f15d805abf0 con 0x7f15f010a910 2026-03-10T13:37:44.696 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:37:44.696 
INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":46,"fsid":"e063dc72-1c85-11f1-a098-09993c5c5b66","created":"2026-03-10T13:35:24.006116+0000","modified":"2026-03-10T13:37:43.799387+0000","last_up_change":"2026-03-10T13:37:43.799387+0000","last_in_change":"2026-03-10T13:37:32.453876+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T13:36:50.580400+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"27418eee-abb2-4d75-aadf-ed68d081290c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":45,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6801","nonce":3141950523}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6802","nonce":3141950523}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6804","nonce":3141950523}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6803","nonce":3141950523}]},"public_addr":"192.168.123.105:6801/3141950523","cluster_addr":"192.168.123.105:6802/3141950523","heartbeat_back_addr":"192.168.123.105:6804/3141950523","heartbeat_front_addr":"192.168.123.105:6803/3141950523","state":["exists","up"]},{"osd":1,"uuid":"f512d6be-c3f7-4742-a120-ab1907d08ac3","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":29,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6805","nonce":1936282018}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6806","nonce":1936282018}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6808","nonce":1936282018}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6807","nonce":1936282018}]},"public_addr":"192.168.123.105:6805/1936282018","cluster_addr":"192.168.123.105:6806/1936282018","heartbeat_back_addr":"192.168.123.105:6808/1936282018","heartbeat_front_addr":"192.168.123.105:6807/1936282018","state":["exists","up"]},{"osd":2,"uuid":"a686b53f-59af-40c9-a5d6-bde07754c934","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6809","nonce":3999426341}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6810","nonce":3999426341}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6812","nonce":3999426341}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6811","nonce":3999426341}]},"public_addr":"192.168.123.105:6809/3999426341","cluster_addr":"192.168.123.105:6810/3999426341","heartbeat_back_addr":"192.168.123.105:6812/3999426341","heartbeat_front_addr":"192.168.123.105:6811/3999426341","state":["exists","up"]},{"osd":3,"uuid":"e9aa7ce5-7d1a-4946-9551-10bfc47bd58b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":24,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6813","nonce":693788844}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6814","nonce":693788844}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6816","nonce":693788844}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6815","nonce":693788844}]},"public_addr":"192.168.123.105:6813/693788844","cluster_addr":"192.168.123.105:6814/693788844","heartbeat_back_addr":"192.168.123.105:6816/693788844","heartbeat_front_addr":"192.168.123.105:6815/693788844","state":["exists","up"]},{"osd":4,"uuid":"72d4c584-8c2a-4a71-a3f3-b3a23f142
206","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6800","nonce":3898346219}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6801","nonce":3898346219}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6803","nonce":3898346219}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6802","nonce":3898346219}]},"public_addr":"192.168.123.109:6800/3898346219","cluster_addr":"192.168.123.109:6801/3898346219","heartbeat_back_addr":"192.168.123.109:6803/3898346219","heartbeat_front_addr":"192.168.123.109:6802/3898346219","state":["exists","up"]},{"osd":5,"uuid":"dba319a5-a2e5-417f-b334-ac4bdbd6a2aa","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":35,"up_thru":36,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6804","nonce":452558008}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6805","nonce":452558008}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6807","nonce":452558008}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6806","nonce":452558008}]},"public_addr":"192.168.123.109:6804/452558008","cluster_addr":"192.168.123.109:6805/452558008","heartbeat_back_addr":"192.168.123.109:6807/452558008","heartbeat_front_addr":"192.168.123.109:6806/452558008","state":["exists","up"]},{"osd":6,"uuid":"afe42148-806c-4ff6-9729-634661c10d48","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":41,"up_thru":42,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6808","nonce":354656606}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6809","nonce":354656606}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6811","nonce":354656606}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6810","nonce":354656606}]},"public_addr":"192.168.123.109:6808/354656606","cluster_addr":"192.168.123.109:6809/354656606","heartbeat_back_addr":"192.168.123.109:6811/354656606","heartbeat_front_addr":"192.168.123.109:6810/354656606","state":["exists","up"]},{"osd":7,"uuid":"902bce05-1aee-4630-a57d-74b141285652","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":46,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6812","nonce":3977889858}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6813","nonce":3977889858}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6815","nonce":3977889858}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6814","nonce":3977889858}]},"public_addr":"192.168.123.109:6812/3977889858","cluster_addr":"192.168.123.109:6813/3977889858","heartbeat_back_addr":"192.168.123.109:6815/3977889858","heartbeat_front_addr":"192.168.123.109:6814/3977889858","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:36:24.966724+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:36:3
6.004110+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:36:46.557144+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:36:58.013466+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:37:08.845599+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:37:19.063608+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:37:30.636274+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[{"pgid":"1.0","osds":[0,6,1]}],"primary_temp":[],"blocklist":{"192.168.123.105:0/3712072921":"2026-03-11T13:35:46.531042+0000","192.168.123.105:6800/3334108074":"2026-03-11T13:35:46.531042+0000","192.168.123.105:0/2337127528":"2026-03-11T13:35:46.531042+0000","192.168.123.105:0/3792932241":"2026-03-11T13:35:35.615869+0000","192.168.123.105:0/1473752177":"2026-03-11T13:35:46.531042+0000","192.168.123.105:0/3043025705":"2026-03-11T13:35:35.615869+0000","192.168.123.105:0/1619687969":"2026-03-11T13:35:35.615869+0000","192.168.123.105:6800/1920070151":"2026-03-11T13:35:35.615869+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T13:37:44.698 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.697+0000 7f15f7d02640 1 -- 192.168.123.105:0/1464695060 >> v1:192.168.123.105:6800/3845654103 conn(0x7f15bc0780e0 legacy=0x7f15bc07a5a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:44.698 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.697+0000 7f15f7d02640 1 -- 192.168.123.105:0/1464695060 >> v1:192.168.123.109:6789/0 conn(0x7f15f010a910 legacy=0x7f15f01109a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:44.698 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.698+0000 7f15f7d02640 1 -- 192.168.123.105:0/1464695060 shutdown_connections 2026-03-10T13:37:44.698 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.698+0000 7f15f7d02640 1 -- 192.168.123.105:0/1464695060 >> 192.168.123.105:0/1464695060 conn(0x7f15f01005f0 msgr2=0x7f15f01034c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:44.698 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.698+0000 7f15f7d02640 1 -- 192.168.123.105:0/1464695060 shutdown_connections 2026-03-10T13:37:44.698 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:44.698+0000 7f15f7d02640 1 -- 192.168.123.105:0/1464695060 wait complete. 
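The `osd dump --format json` payload above carries the cluster state the harness checks next (epoch, per-OSD up/in flags, and per-pool settings such as `pg_num`). A minimal sketch of how that structure could be summarized, assuming the JSON printed above has been saved to a file (the path `osd_dump.json` is a stand-in, not something the run produces):

```python
import json

# Hypothetical capture of the `ceph osd dump --format json` output shown above.
with open("osd_dump.json") as f:
    dump = json.load(f)

osds = dump["osds"]
print(f"epoch {dump['epoch']}: {len(osds)} osds, "
      f"{sum(o['up'] for o in osds)} up, {sum(o['in'] for o in osds)} in")

# Each pool entry includes the name, replication size, and pg_num seen in the dump.
for pool in dump["pools"]:
    print(f"pool {pool['pool']} '{pool['pool_name']}': "
          f"size={pool['size']} pg_num={pool['pg_num']}")
```

Against the dump above this would report epoch 46 with 8 OSDs up/in and the single `.mgr` pool at pg_num 1.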
2026-03-10T13:37:45.240 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-10T13:36:50.580400+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-10T13:37:45.240 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd pool get .mgr pg_num 2026-03-10T13:37:45.413 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:45.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:45 vm09 ceph-mon[53367]: purged_snaps scrub starts 2026-03-10T13:37:45.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:45 vm09 ceph-mon[53367]: purged_snaps scrub ok 2026-03-10T13:37:45.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1723303680' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:37:45.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1464695060' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:37:45.436 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:45 vm05 ceph-mon[51512]: purged_snaps scrub starts 2026-03-10T13:37:45.436 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:45 vm05 ceph-mon[51512]: purged_snaps scrub ok 2026-03-10T13:37:45.436 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:45 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1723303680' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:37:45.436 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1464695060' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:37:45.436 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:45 vm05 ceph-mon[58955]: purged_snaps scrub starts 2026-03-10T13:37:45.436 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:45 vm05 ceph-mon[58955]: purged_snaps scrub ok 2026-03-10T13:37:45.436 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1723303680' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:37:45.436 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1464695060' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:37:45.537 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.536+0000 7f3bf05a4640 1 -- 192.168.123.105:0/1713071417 >> v1:192.168.123.105:6789/0 conn(0x7f3be8111390 legacy=0x7f3be8113850 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:45.537 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.536+0000 7f3bf05a4640 1 -- 192.168.123.105:0/1713071417 shutdown_connections 2026-03-10T13:37:45.537 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.536+0000 7f3bf05a4640 1 -- 192.168.123.105:0/1713071417 >> 192.168.123.105:0/1713071417 conn(0x7f3be8100620 msgr2=0x7f3be8102a40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:45.537 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.537+0000 7f3bf05a4640 1 -- 192.168.123.105:0/1713071417 shutdown_connections 2026-03-10T13:37:45.537 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.537+0000 7f3bf05a4640 1 -- 192.168.123.105:0/1713071417 wait complete. 
2026-03-10T13:37:45.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.537+0000 7f3bf05a4640 1 Processor -- start 2026-03-10T13:37:45.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.537+0000 7f3bf05a4640 1 -- start start 2026-03-10T13:37:45.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.538+0000 7f3bf05a4640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f3be81ab7e0 con 0x7f3be810a940 2026-03-10T13:37:45.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.538+0000 7f3bf05a4640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f3be81ac9e0 con 0x7f3be810d7f0 2026-03-10T13:37:45.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.538+0000 7f3bf05a4640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f3be81adbe0 con 0x7f3be8111390 2026-03-10T13:37:45.538 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.538+0000 7f3bee319640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f3be810a940 0x7f3be8110a70 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:34340/0 (socket says 192.168.123.105:34340) 2026-03-10T13:37:45.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.538+0000 7f3bee319640 1 -- 192.168.123.105:0/856558337 learned_addr learned my addr 192.168.123.105:0/856558337 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:45.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.539+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3346558415 0 0) 0x7f3be81ab7e0 con 0x7f3be810a940 2026-03-10T13:37:45.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.539+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f3bbc003620 con 0x7f3be810a940 2026-03-10T13:37:45.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.539+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2350421108 0 0) 0x7f3bbc003620 con 0x7f3be810a940 2026-03-10T13:37:45.539 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.539+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f3be81ab7e0 con 0x7f3be810a940 2026-03-10T13:37:45.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.539+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3024594902 0 0) 0x7f3be81ac9e0 con 0x7f3be810d7f0 2026-03-10T13:37:45.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.539+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f3bbc003620 con 0x7f3be810d7f0 2026-03-10T13:37:45.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.539+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3117539354 0 0) 0x7f3be81adbe0 con 0x7f3be8111390 2026-03-10T13:37:45.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.539+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 --> 
v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f3be81ac9e0 con 0x7f3be8111390 2026-03-10T13:37:45.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.539+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f3bdc003150 con 0x7f3be810a940 2026-03-10T13:37:45.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.540+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1132803146 0 0) 0x7f3be81ab7e0 con 0x7f3be810a940 2026-03-10T13:37:45.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.540+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 >> v1:192.168.123.105:6790/0 conn(0x7f3be8111390 legacy=0x7f3be81a9ee0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:45.540 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.540+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 >> v1:192.168.123.109:6789/0 conn(0x7f3be810d7f0 legacy=0x7f3be81a6680 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:45.541 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.540+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3be81aede0 con 0x7f3be810a940 2026-03-10T13:37:45.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.540+0000 7f3bf05a4640 1 -- 192.168.123.105:0/856558337 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f3be81acc10 con 0x7f3be810a940 2026-03-10T13:37:45.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.540+0000 7f3bf05a4640 1 -- 192.168.123.105:0/856558337 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f3be81ad1c0 con 0x7f3be810a940 2026-03-10T13:37:45.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.541+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f3bdc0032f0 con 0x7f3be810a940 2026-03-10T13:37:45.542 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.542+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f3bdc005c40 con 0x7f3be810a940 2026-03-10T13:37:45.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.542+0000 7f3bf05a4640 1 -- 192.168.123.105:0/856558337 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3bb0005180 con 0x7f3be810a940 2026-03-10T13:37:45.545 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.543+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f3bdc006f10 con 0x7f3be810a940 2026-03-10T13:37:45.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.543+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(47..47 src has 1..47) ==== 4061+0+0 (unknown 2827185547 0 0) 0x7f3bdc095620 con 0x7f3be810a940 2026-03-10T13:37:45.546 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.545+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 
72+0+195034 (unknown 1092875540 0 2568732696) 0x7f3bdc05ef60 con 0x7f3be810a940 2026-03-10T13:37:45.641 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.640+0000 7f3bf05a4640 1 -- 192.168.123.105:0/856558337 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"} v 0) -- 0x7f3bb0005d40 con 0x7f3be810a940 2026-03-10T13:37:45.641 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.641+0000 7f3bd77fe640 1 -- 192.168.123.105:0/856558337 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]=0 v47) ==== 93+0+10 (unknown 1713183659 0 2170607528) 0x7f3bdc062c10 con 0x7f3be810a940 2026-03-10T13:37:45.642 INFO:teuthology.orchestra.run.vm05.stdout:pg_num: 1 2026-03-10T13:37:45.643 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.642+0000 7f3bf05a4640 1 -- 192.168.123.105:0/856558337 >> v1:192.168.123.105:6800/3845654103 conn(0x7f3bbc078330 legacy=0x7f3bbc07a7f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:45.643 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.642+0000 7f3bf05a4640 1 -- 192.168.123.105:0/856558337 >> v1:192.168.123.105:6789/0 conn(0x7f3be810a940 legacy=0x7f3be8110a70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:45.643 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.643+0000 7f3bf05a4640 1 -- 192.168.123.105:0/856558337 shutdown_connections 2026-03-10T13:37:45.643 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.643+0000 7f3bf05a4640 1 -- 192.168.123.105:0/856558337 >> 192.168.123.105:0/856558337 conn(0x7f3be8100620 msgr2=0x7f3be8103610 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:45.643 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.643+0000 7f3bf05a4640 1 -- 192.168.123.105:0/856558337 shutdown_connections 2026-03-10T13:37:45.643 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:45.643+0000 7f3bf05a4640 1 -- 192.168.123.105:0/856558337 wait complete. 
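The `osd pool get .mgr pg_num` check above is driven through a `cephadm shell` invocation and returns `pg_num: 1` on stdout. A sketch of reproducing the same check by hand, reusing the fsid and image from this run (the standalone-script framing is an assumption; the harness issues this via teuthology, not a script):

```python
import subprocess

FSID = "e063dc72-1c85-11f1-a098-09993c5c5b66"   # fsid from this run
IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

# Same command the harness ran on vm05 above.
out = subprocess.check_output(
    ["sudo", "cephadm", "--image", IMAGE, "shell", "--fsid", FSID,
     "--", "ceph", "osd", "pool", "get", ".mgr", "pg_num"],
    text=True,
)
# Expected stdout, as captured in the log: "pg_num: 1"
assert out.strip() == "pg_num: 1", out
```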
2026-03-10T13:37:45.791 INFO:tasks.cephadm:Adding ceph.rgw.foo.a on vm05 2026-03-10T13:37:45.791 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch apply rgw foo.a --placement '1;vm05=foo.a' 2026-03-10T13:37:45.957 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:37:46.094 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.093+0000 7feb922b9640 1 -- 192.168.123.109:0/2808624827 >> v1:192.168.123.109:6789/0 conn(0x7feb8c1049b0 legacy=0x7feb8c104db0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:46.094 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.094+0000 7feb922b9640 1 -- 192.168.123.109:0/2808624827 shutdown_connections 2026-03-10T13:37:46.094 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.094+0000 7feb922b9640 1 -- 192.168.123.109:0/2808624827 >> 192.168.123.109:0/2808624827 conn(0x7feb8c100120 msgr2=0x7feb8c102580 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:46.094 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.095+0000 7feb922b9640 1 -- 192.168.123.109:0/2808624827 shutdown_connections 2026-03-10T13:37:46.095 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.095+0000 7feb922b9640 1 -- 192.168.123.109:0/2808624827 wait complete. 2026-03-10T13:37:46.095 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.095+0000 7feb922b9640 1 Processor -- start 2026-03-10T13:37:46.095 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.095+0000 7feb922b9640 1 -- start start 2026-03-10T13:37:46.096 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.096+0000 7feb922b9640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7feb8c19ca50 con 0x7feb8c108de0 2026-03-10T13:37:46.096 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.096+0000 7feb922b9640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7feb8c1a8210 con 0x7feb8c1049b0 2026-03-10T13:37:46.096 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.096+0000 7feb922b9640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7feb8c1a93f0 con 0x7feb8c10caf0 2026-03-10T13:37:46.096 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.096+0000 7feb9082f640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7feb8c10caf0 0x7feb8c1a5ae0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.109:53168/0 (socket says 192.168.123.109:53168) 2026-03-10T13:37:46.096 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.096+0000 7feb9082f640 1 -- 192.168.123.109:0/3045277553 learned_addr learned my addr 192.168.123.109:0/3045277553 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-10T13:37:46.096 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.096+0000 7feb897fa640 1 -- 192.168.123.109:0/3045277553 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 432013707 0 0) 0x7feb8c1a93f0 con 0x7feb8c10caf0 2026-03-10T13:37:46.096 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.096+0000 7feb897fa640 1 -- 192.168.123.109:0/3045277553 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) 
-- 0x7feb60003620 con 0x7feb8c10caf0 2026-03-10T13:37:46.096 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.096+0000 7feb897fa640 1 -- 192.168.123.109:0/3045277553 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1914162777 0 0) 0x7feb60003620 con 0x7feb8c10caf0 2026-03-10T13:37:46.096 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.096+0000 7feb897fa640 1 -- 192.168.123.109:0/3045277553 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7feb8c1a93f0 con 0x7feb8c10caf0 2026-03-10T13:37:46.096 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.096+0000 7feb897fa640 1 -- 192.168.123.109:0/3045277553 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7feb80003500 con 0x7feb8c10caf0 2026-03-10T13:37:46.096 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.097+0000 7feb897fa640 1 -- 192.168.123.109:0/3045277553 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3628519385 0 0) 0x7feb8c1a93f0 con 0x7feb8c10caf0 2026-03-10T13:37:46.096 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.097+0000 7feb897fa640 1 -- 192.168.123.109:0/3045277553 >> v1:192.168.123.109:6789/0 conn(0x7feb8c1049b0 legacy=0x7feb8c19bed0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T13:37:46.097 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.097+0000 7feb897fa640 1 -- 192.168.123.109:0/3045277553 >> v1:192.168.123.105:6789/0 conn(0x7feb8c108de0 legacy=0x7feb8c1a23b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:46.097 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.097+0000 7feb897fa640 1 -- 192.168.123.109:0/3045277553 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7feb8c1aa5d0 con 0x7feb8c10caf0 2026-03-10T13:37:46.097 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.097+0000 7feb897fa640 1 -- 192.168.123.109:0/3045277553 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7feb80003840 con 0x7feb8c10caf0 2026-03-10T13:37:46.098 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.097+0000 7feb897fa640 1 -- 192.168.123.109:0/3045277553 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7feb80006130 con 0x7feb8c10caf0 2026-03-10T13:37:46.098 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.097+0000 7feb922b9640 1 -- 192.168.123.109:0/3045277553 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7feb8c1a7260 con 0x7feb8c10caf0 2026-03-10T13:37:46.098 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.097+0000 7feb922b9640 1 -- 192.168.123.109:0/3045277553 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7feb8c1a7840 con 0x7feb8c10caf0 2026-03-10T13:37:46.098 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.098+0000 7feb922b9640 1 -- 192.168.123.109:0/3045277553 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7feb50005180 con 0x7feb8c10caf0 2026-03-10T13:37:46.099 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.099+0000 7feb897fa640 1 -- 192.168.123.109:0/3045277553 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7feb80003d40 con 0x7feb8c10caf0 2026-03-10T13:37:46.100 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.100+0000 7feb897fa640 1 -- 192.168.123.109:0/3045277553 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(47..47 src has 1..47) ==== 4061+0+0 (unknown 2827185547 0 0) 0x7feb80094c70 con 0x7feb8c10caf0 2026-03-10T13:37:46.101 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.101+0000 7feb897fa640 1 -- 192.168.123.109:0/3045277553 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7feb8005e490 con 0x7feb8c10caf0 2026-03-10T13:37:46.202 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.202+0000 7feb922b9640 1 -- 192.168.123.109:0/3045277553 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm05=foo.a", "target": ["mon-mgr", ""]}) -- 0x7feb50002bf0 con 0x7feb600780a0 2026-03-10T13:37:46.207 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.208+0000 7feb897fa640 1 -- 192.168.123.109:0/3045277553 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+30 (unknown 0 0 1123153589) 0x7feb50002bf0 con 0x7feb600780a0 2026-03-10T13:37:46.208 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled rgw.foo.a update... 2026-03-10T13:37:46.210 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.210+0000 7feb922b9640 1 -- 192.168.123.109:0/3045277553 >> v1:192.168.123.105:6800/3845654103 conn(0x7feb600780a0 legacy=0x7feb6007a560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:46.210 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.210+0000 7feb922b9640 1 -- 192.168.123.109:0/3045277553 >> v1:192.168.123.105:6790/0 conn(0x7feb8c10caf0 legacy=0x7feb8c1a5ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:46.210 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.210+0000 7feb922b9640 1 -- 192.168.123.109:0/3045277553 shutdown_connections 2026-03-10T13:37:46.210 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.210+0000 7feb922b9640 1 -- 192.168.123.109:0/3045277553 >> 192.168.123.109:0/3045277553 conn(0x7feb8c100120 msgr2=0x7feb8c10b230 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:46.210 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.210+0000 7feb922b9640 1 -- 192.168.123.109:0/3045277553 shutdown_connections 2026-03-10T13:37:46.210 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.210+0000 7feb922b9640 1 -- 192.168.123.109:0/3045277553 wait complete. 2026-03-10T13:37:46.335 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:46 vm09 ceph-mon[53367]: pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-10T13:37:46.335 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:46 vm09 ceph-mon[53367]: osdmap e47: 8 total, 8 up, 8 in 2026-03-10T13:37:46.335 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:46 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/856558337' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T13:37:46.360 DEBUG:teuthology.orchestra.run.vm05:rgw.foo.a> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@rgw.foo.a.service 2026-03-10T13:37:46.362 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.a on vm09 2026-03-10T13:37:46.362 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd pool create datapool 3 3 replicated 2026-03-10T13:37:46.445 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:46 vm05 ceph-mon[58955]: pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-10T13:37:46.445 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:46 vm05 ceph-mon[58955]: osdmap e47: 8 total, 8 up, 8 in 2026-03-10T13:37:46.445 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/856558337' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T13:37:46.446 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:46 vm05 ceph-mon[51512]: pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-10T13:37:46.446 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:46 vm05 ceph-mon[51512]: osdmap e47: 8 total, 8 up, 8 in 2026-03-10T13:37:46.446 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/856558337' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T13:37:46.537 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:37:46.669 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.669+0000 7f025090b640 1 -- 192.168.123.109:0/534254250 >> v1:192.168.123.109:6789/0 conn(0x7f0248104990 legacy=0x7f0248104d90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:46.669 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.669+0000 7f025090b640 1 -- 192.168.123.109:0/534254250 shutdown_connections 2026-03-10T13:37:46.670 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.669+0000 7f025090b640 1 -- 192.168.123.109:0/534254250 >> 192.168.123.109:0/534254250 conn(0x7f0248100120 msgr2=0x7f0248102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:46.670 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.669+0000 7f025090b640 1 -- 192.168.123.109:0/534254250 shutdown_connections 2026-03-10T13:37:46.670 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.670+0000 7f025090b640 1 -- 192.168.123.109:0/534254250 wait complete. 
2026-03-10T13:37:46.670 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.670+0000 7f025090b640 1 Processor -- start 2026-03-10T13:37:46.670 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.670+0000 7f025090b640 1 -- start start 2026-03-10T13:37:46.670 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.670+0000 7f025090b640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f024819ca90 con 0x7f0248108dc0 2026-03-10T13:37:46.670 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.670+0000 7f025090b640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f02481a8250 con 0x7f0248104990 2026-03-10T13:37:46.670 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.670+0000 7f025090b640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f02481a9430 con 0x7f024810cad0 2026-03-10T13:37:46.671 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.670+0000 7f024e680640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f0248104990 0x7f024819bf10 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.109:43088/0 (socket says 192.168.123.109:43088) 2026-03-10T13:37:46.672 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.670+0000 7f024e680640 1 -- 192.168.123.109:0/1895039743 learned_addr learned my addr 192.168.123.109:0/1895039743 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-10T13:37:46.672 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.671+0000 7f02337fe640 1 -- 192.168.123.109:0/1895039743 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1140699317 0 0) 0x7f02481a8250 con 0x7f0248104990 2026-03-10T13:37:46.672 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.671+0000 7f02337fe640 1 -- 192.168.123.109:0/1895039743 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0214003620 con 0x7f0248104990 2026-03-10T13:37:46.672 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.671+0000 7f02337fe640 1 -- 192.168.123.109:0/1895039743 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 814785551 0 0) 0x7f0214003620 con 0x7f0248104990 2026-03-10T13:37:46.672 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.671+0000 7f02337fe640 1 -- 192.168.123.109:0/1895039743 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f02481a8250 con 0x7f0248104990 2026-03-10T13:37:46.672 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.671+0000 7f02337fe640 1 -- 192.168.123.109:0/1895039743 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f0234002f60 con 0x7f0248104990 2026-03-10T13:37:46.672 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.671+0000 7f02337fe640 1 -- 192.168.123.109:0/1895039743 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2652786366 0 0) 0x7f02481a8250 con 0x7f0248104990 2026-03-10T13:37:46.672 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.671+0000 7f02337fe640 1 -- 192.168.123.109:0/1895039743 >> v1:192.168.123.105:6790/0 conn(0x7f024810cad0 legacy=0x7f02481a5b20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:46.672 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.671+0000 7f02337fe640 1 -- 
192.168.123.109:0/1895039743 >> v1:192.168.123.105:6789/0 conn(0x7f0248108dc0 legacy=0x7f02481a23f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:46.673 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.671+0000 7f02337fe640 1 -- 192.168.123.109:0/1895039743 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f02481aa610 con 0x7f0248104990 2026-03-10T13:37:46.673 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.671+0000 7f025090b640 1 -- 192.168.123.109:0/1895039743 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f02481a9660 con 0x7f0248104990 2026-03-10T13:37:46.673 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.671+0000 7f025090b640 1 -- 192.168.123.109:0/1895039743 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f02481a9c40 con 0x7f0248104990 2026-03-10T13:37:46.673 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.671+0000 7f02337fe640 1 -- 192.168.123.109:0/1895039743 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f0234003f50 con 0x7f0248104990 2026-03-10T13:37:46.673 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.671+0000 7f02337fe640 1 -- 192.168.123.109:0/1895039743 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f0234005ef0 con 0x7f0248104990 2026-03-10T13:37:46.673 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.672+0000 7f02337fe640 1 -- 192.168.123.109:0/1895039743 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f0234003b00 con 0x7f0248104990 2026-03-10T13:37:46.673 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.673+0000 7f02337fe640 1 -- 192.168.123.109:0/1895039743 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(48..48 src has 1..48) ==== 4061+0+0 (unknown 482195197 0 0) 0x7f0234094a50 con 0x7f0248104990 2026-03-10T13:37:46.673 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.673+0000 7f025090b640 1 -- 192.168.123.109:0/1895039743 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0218005180 con 0x7f0248104990 2026-03-10T13:37:46.676 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.676+0000 7f02337fe640 1 -- 192.168.123.109:0/1895039743 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f023405e270 con 0x7f0248104990 2026-03-10T13:37:46.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:46.769+0000 7f025090b640 1 -- 192.168.123.109:0/1895039743 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"} v 0) -- 0x7f0218005470 con 0x7f0248104990 2026-03-10T13:37:47.061 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 10 13:37:46 vm05 systemd[1]: Starting Ceph rgw.foo.a for e063dc72-1c85-11f1-a098-09993c5c5b66... 
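The rgw daemon now starting on vm05 was requested a few lines earlier with `ceph orch apply rgw foo.a --placement '1;vm05=foo.a'`; the mon log records the resulting spec as `placement vm05=foo.a;count:1`, i.e. one daemon pinned to host vm05 (the `=foo.a` suffix appears to carry the daemon name). A sketch of the same call outside the harness, reusing this run's fsid and image:

```python
import subprocess

FSID = "e063dc72-1c85-11f1-a098-09993c5c5b66"
IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

# Same orch call the task issued from vm09: one rgw daemon placed on vm05.
subprocess.run(
    ["sudo", "cephadm", "--image", IMAGE, "shell",
     "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring",
     "--fsid", FSID, "--",
     "ceph", "orch", "apply", "rgw", "foo.a", "--placement", "1;vm05=foo.a"],
    check=True,
)
```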
2026-03-10T13:37:47.229 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:47.229+0000 7f02337fe640 1 -- 192.168.123.109:0/1895039743 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]=0 pool 'datapool' created v49) ==== 160+0+0 (unknown 2280981637 0 0) 0x7f0234061f20 con 0x7f0248104990 2026-03-10T13:37:47.230 INFO:teuthology.orchestra.run.vm09.stderr:pool 'datapool' created 2026-03-10T13:37:47.233 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:47.233+0000 7f025090b640 1 -- 192.168.123.109:0/1895039743 >> v1:192.168.123.105:6800/3845654103 conn(0x7f0214078090 legacy=0x7f021407a550 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:47.233 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:47.233+0000 7f025090b640 1 -- 192.168.123.109:0/1895039743 >> v1:192.168.123.109:6789/0 conn(0x7f0248104990 legacy=0x7f024819bf10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:47.238 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:47.233+0000 7f025090b640 1 -- 192.168.123.109:0/1895039743 shutdown_connections 2026-03-10T13:37:47.238 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:47.233+0000 7f025090b640 1 -- 192.168.123.109:0/1895039743 >> 192.168.123.109:0/1895039743 conn(0x7f0248100120 msgr2=0x7f024810b210 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:47.238 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:47.233+0000 7f025090b640 1 -- 192.168.123.109:0/1895039743 shutdown_connections 2026-03-10T13:37:47.238 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:47.233+0000 7f025090b640 1 -- 192.168.123.109:0/1895039743 wait complete. 
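With `datapool` created (3 PGs, replicated) for the iscsi gateway, the task's next step, visible a little further down in this log, is `rbd pool init datapool`. A sketch of the two-step pool setup using the same `cephadm shell` pattern as the commands above; the helper function is illustrative, not part of the harness:

```python
import subprocess

FSID = "e063dc72-1c85-11f1-a098-09993c5c5b66"
IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

def cephadm_shell(*cmd):
    """Run one command inside a cephadm shell on this cluster (same pattern as the log)."""
    return subprocess.check_output(
        ["sudo", "cephadm", "--image", IMAGE, "shell",
         "-c", "/etc/ceph/ceph.conf", "-k", "/etc/ceph/ceph.client.admin.keyring",
         "--fsid", FSID, "--", *cmd],
        text=True,
    )

# Pool creation acknowledged above, followed by the rbd init the task runs next.
cephadm_shell("ceph", "osd", "pool", "create", "datapool", "3", "3", "replicated")
cephadm_shell("rbd", "pool", "init", "datapool")
```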
2026-03-10T13:37:47.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[51512]: osdmap e48: 8 total, 8 up, 8 in 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[51512]: from='client.24347 v1:192.168.123.109:0/3045277553' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm05=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[51512]: Saving service rgw.foo.a spec with placement vm05=foo.a;count:1 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[51512]: Deploying daemon rgw.foo.a on vm05 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.109:0/1895039743' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[51512]: from='client.24346 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[58955]: osdmap e48: 8 total, 8 up, 8 in 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[58955]: from='client.24347 v1:192.168.123.109:0/3045277553' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm05=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[58955]: Saving service rgw.foo.a spec with placement vm05=foo.a;count:1 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[58955]: Deploying daemon rgw.foo.a on vm05 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.109:0/1895039743' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:47 vm05 ceph-mon[58955]: from='client.24346 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T13:37:47.332 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 10 13:37:47 vm05 podman[83410]: 2026-03-10 13:37:47.061563616 +0000 UTC m=+0.015924352 container create d76827d6a8cd52535e652e244fbeb2ed520e35a9f62d6c1d949ceba0b2e6e249 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-rgw-foo-a, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T13:37:47.332 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 10 13:37:47 vm05 podman[83410]: 2026-03-10 13:37:47.107464714 +0000 UTC m=+0.061825439 container init d76827d6a8cd52535e652e244fbeb2ed520e35a9f62d6c1d949ceba0b2e6e249 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-rgw-foo-a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T13:37:47.332 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 10 13:37:47 vm05 podman[83410]: 2026-03-10 13:37:47.112156557 +0000 UTC m=+0.066517293 container start d76827d6a8cd52535e652e244fbeb2ed520e35a9f62d6c1d949ceba0b2e6e249 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-rgw-foo-a, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, OSD_FLAVOR=default) 
2026-03-10T13:37:47.332 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 10 13:37:47 vm05 bash[83410]: d76827d6a8cd52535e652e244fbeb2ed520e35a9f62d6c1d949ceba0b2e6e249 2026-03-10T13:37:47.332 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 10 13:37:47 vm05 podman[83410]: 2026-03-10 13:37:47.055218529 +0000 UTC m=+0.009579275 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T13:37:47.332 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 10 13:37:47 vm05 systemd[1]: Started Ceph rgw.foo.a for e063dc72-1c85-11f1-a098-09993c5c5b66. 2026-03-10T13:37:47.410 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- rbd pool init datapool 2026-03-10T13:37:47.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:47 vm09 ceph-mon[53367]: osdmap e48: 8 total, 8 up, 8 in 2026-03-10T13:37:47.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:47 vm09 ceph-mon[53367]: from='client.24347 v1:192.168.123.109:0/3045277553' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm05=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:47.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:47 vm09 ceph-mon[53367]: Saving service rgw.foo.a spec with placement vm05=foo.a;count:1 2026-03-10T13:37:47.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:47.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:47.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:47.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:47.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:47.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T13:37:47.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T13:37:47.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:47.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:47 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T13:37:47.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:47 vm09 ceph-mon[53367]: Deploying daemon rgw.foo.a on vm05 2026-03-10T13:37:47.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:47 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.109:0/1895039743' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T13:37:47.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:47 vm09 ceph-mon[53367]: from='client.24346 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T13:37:47.581 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:37:47.683 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:47.683+0000 7f7f9eab1640 1 -- 192.168.123.109:0/2236533034 <== mon.1 v1:192.168.123.109:6789/0 5 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f7f88002f20 con 0x560282872020 2026-03-10T13:37:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: Saving service rgw.foo.a spec with placement vm05=foo.a;count:1 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: from='client.24346 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: osdmap e49: 8 total, 8 up, 8 in 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/263256106' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: from='client.24356 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: from='client.24361 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.109:0/2527439427' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:48 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:48.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:37:48.586 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: Saving service rgw.foo.a spec with placement vm05=foo.a;count:1 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: from='client.24346 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: osdmap e49: 8 total, 8 up, 8 in 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 
ceph-mon[58955]: from='client.? v1:192.168.123.105:0/263256106' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: from='client.24356 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: from='client.24361 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.109:0/2527439427' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: Saving service rgw.foo.a spec with placement vm05=foo.a;count:1 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: from='client.24346 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: osdmap e49: 8 total, 8 up, 8 in 2026-03-10T13:37:48.587 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/263256106' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: from='client.24356 ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: from='client.24361 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.109:0/2527439427' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:48.587 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:48 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:37:49.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:49 vm05 ceph-mon[58955]: Checking dashboard <-> RGW credentials 2026-03-10T13:37:49.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:49 vm05 ceph-mon[58955]: from='client.24356 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T13:37:49.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:49 vm05 ceph-mon[58955]: from='client.24361 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T13:37:49.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:49 vm05 ceph-mon[58955]: osdmap e50: 8 total, 8 up, 8 in 2026-03-10T13:37:49.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:49 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:49.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:49 vm05 ceph-mon[51512]: Checking dashboard <-> RGW credentials 2026-03-10T13:37:49.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:49 vm05 ceph-mon[51512]: from='client.24356 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T13:37:49.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:49 vm05 ceph-mon[51512]: from='client.24361 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T13:37:49.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:49 vm05 ceph-mon[51512]: osdmap e50: 8 total, 8 up, 8 in 2026-03-10T13:37:49.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:49 vm05 ceph-mon[51512]: from='mgr.14150 
v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:49 vm09 ceph-mon[53367]: Checking dashboard <-> RGW credentials 2026-03-10T13:37:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:49 vm09 ceph-mon[53367]: from='client.24356 ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T13:37:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:49 vm09 ceph-mon[53367]: from='client.24361 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T13:37:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:49 vm09 ceph-mon[53367]: osdmap e50: 8 total, 8 up, 8 in 2026-03-10T13:37:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:49 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:50.423 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch apply iscsi datapool admin admin --trusted_ip_list 192.168.123.109 --placement '1;vm09=iscsi.a' 2026-03-10T13:37:50.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:50 vm05 ceph-mon[58955]: pgmap v101: 36 pgs: 3 creating+peering, 32 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:37:50.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:50 vm05 ceph-mon[58955]: osdmap e51: 8 total, 8 up, 8 in 2026-03-10T13:37:50.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T13:37:50.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2494514875' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T13:37:50.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:50 vm05 ceph-mon[58955]: from='client.24401 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T13:37:50.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:50 vm05 ceph-mon[51512]: pgmap v101: 36 pgs: 3 creating+peering, 32 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:37:50.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:50 vm05 ceph-mon[51512]: osdmap e51: 8 total, 8 up, 8 in 2026-03-10T13:37:50.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T13:37:50.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:50 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2494514875' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T13:37:50.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:50 vm05 ceph-mon[51512]: from='client.24401 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T13:37:50.634 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:37:50.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:50 vm09 ceph-mon[53367]: pgmap v101: 36 pgs: 3 creating+peering, 32 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:37:50.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:50 vm09 ceph-mon[53367]: osdmap e51: 8 total, 8 up, 8 in 2026-03-10T13:37:50.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T13:37:50.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2494514875' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T13:37:50.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:50 vm09 ceph-mon[53367]: from='client.24401 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T13:37:50.767 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.767+0000 7f06abd5a640 1 -- 192.168.123.109:0/1493501674 >> v1:192.168.123.109:6789/0 conn(0x7f06a407bdc0 legacy=0x7f06a407e280 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:50.767 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.767+0000 7f06abd5a640 1 -- 192.168.123.109:0/1493501674 shutdown_connections 2026-03-10T13:37:50.767 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.767+0000 7f06abd5a640 1 -- 192.168.123.109:0/1493501674 >> 192.168.123.109:0/1493501674 conn(0x7f06a406f580 msgr2=0x7f06a40719e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:50.767 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.767+0000 7f06abd5a640 1 -- 192.168.123.109:0/1493501674 shutdown_connections 2026-03-10T13:37:50.767 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.767+0000 7f06abd5a640 1 -- 192.168.123.109:0/1493501674 wait complete. 
2026-03-10T13:37:50.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.768+0000 7f06abd5a640 1 Processor -- start 2026-03-10T13:37:50.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.768+0000 7f06abd5a640 1 -- start start 2026-03-10T13:37:50.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.768+0000 7f06abd5a640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f06a41b4740 con 0x7f06a407bdc0 2026-03-10T13:37:50.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.768+0000 7f06abd5a640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f06a41bfbc0 con 0x7f06a4073b20 2026-03-10T13:37:50.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.768+0000 7f06abd5a640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f06a41c0da0 con 0x7f06a4078220 2026-03-10T13:37:50.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.768+0000 7f06aa2d0640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f06a407bdc0 0x7f06a41bd490 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:34920/0 (socket says 192.168.123.109:34920) 2026-03-10T13:37:50.768 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.769+0000 7f06aa2d0640 1 -- 192.168.123.109:0/2298625354 learned_addr learned my addr 192.168.123.109:0/2298625354 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-10T13:37:50.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.769+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3266013512 0 0) 0x7f06a41b4740 con 0x7f06a407bdc0 2026-03-10T13:37:50.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.769+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0680003620 con 0x7f06a407bdc0 2026-03-10T13:37:50.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.769+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3190598529 0 0) 0x7f06a41bfbc0 con 0x7f06a4073b20 2026-03-10T13:37:50.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.769+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f06a41b4740 con 0x7f06a4073b20 2026-03-10T13:37:50.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.769+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4204425901 0 0) 0x7f06a41c0da0 con 0x7f06a4078220 2026-03-10T13:37:50.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.769+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f06a41bfbc0 con 0x7f06a4078220 2026-03-10T13:37:50.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.769+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4260821140 0 0) 0x7f06a41b4740 con 0x7f06a4073b20 2026-03-10T13:37:50.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.769+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 --> 
v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f06a41c0da0 con 0x7f06a4073b20 2026-03-10T13:37:50.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.769+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2692910553 0 0) 0x7f0680003620 con 0x7f06a407bdc0 2026-03-10T13:37:50.769 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.769+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f06a41b4740 con 0x7f06a407bdc0 2026-03-10T13:37:50.770 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.769+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f06980030c0 con 0x7f06a4073b20 2026-03-10T13:37:50.770 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.769+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f069c004850 con 0x7f06a407bdc0 2026-03-10T13:37:50.770 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.770+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1857958499 0 0) 0x7f06a41bfbc0 con 0x7f06a4078220 2026-03-10T13:37:50.770 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.770+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f0680003620 con 0x7f06a4078220 2026-03-10T13:37:50.770 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.770+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2784319260 0 0) 0x7f06a41b4740 con 0x7f06a407bdc0 2026-03-10T13:37:50.770 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.770+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 >> v1:192.168.123.105:6790/0 conn(0x7f06a4078220 legacy=0x7f06a41b9d60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:50.770 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.770+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 >> v1:192.168.123.109:6789/0 conn(0x7f06a4073b20 legacy=0x7f06a41b3ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:50.770 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.770+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f06a41c1f80 con 0x7f06a407bdc0 2026-03-10T13:37:50.770 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.770+0000 7f06abd5a640 1 -- 192.168.123.109:0/2298625354 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f06a41c0fd0 con 0x7f06a407bdc0 2026-03-10T13:37:50.771 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.770+0000 7f06abd5a640 1 -- 192.168.123.109:0/2298625354 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f06a41c1600 con 0x7f06a407bdc0 2026-03-10T13:37:50.771 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.771+0000 7f06abd5a640 1 -- 192.168.123.109:0/2298625354 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f066c005180 con 0x7f06a407bdc0 
2026-03-10T13:37:50.772 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.772+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f069c003740 con 0x7f06a407bdc0 2026-03-10T13:37:50.772 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.772+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f069c004fe0 con 0x7f06a407bdc0 2026-03-10T13:37:50.772 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.772+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f069c005260 con 0x7f06a407bdc0 2026-03-10T13:37:50.773 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.773+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(52..52 src has 1..52) ==== 5199+0+0 (unknown 1842722428 0 0) 0x7f069c05aad0 con 0x7f06a407bdc0 2026-03-10T13:37:50.775 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.775+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f069c05eec0 con 0x7f06a407bdc0 2026-03-10T13:37:50.872 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.872+0000 7f06abd5a640 1 -- 192.168.123.109:0/2298625354 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}) -- 0x7f066c002cc0 con 0x7f068007c960 2026-03-10T13:37:50.881 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.881+0000 7f0696ffd640 1 -- 192.168.123.109:0/2298625354 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+35 (unknown 0 0 803663096) 0x7f066c002cc0 con 0x7f068007c960 2026-03-10T13:37:50.881 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled iscsi.datapool update... 
2026-03-10T13:37:50.883 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.884+0000 7f06abd5a640 1 -- 192.168.123.109:0/2298625354 >> v1:192.168.123.105:6800/3845654103 conn(0x7f068007c960 legacy=0x7f068007ee20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:50.884 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.884+0000 7f06abd5a640 1 -- 192.168.123.109:0/2298625354 >> v1:192.168.123.105:6789/0 conn(0x7f06a407bdc0 legacy=0x7f06a41bd490 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:50.884 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.884+0000 7f06abd5a640 1 -- 192.168.123.109:0/2298625354 shutdown_connections 2026-03-10T13:37:50.884 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.884+0000 7f06abd5a640 1 -- 192.168.123.109:0/2298625354 >> 192.168.123.109:0/2298625354 conn(0x7f06a406f580 msgr2=0x7f06a4077960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:50.884 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.884+0000 7f06abd5a640 1 -- 192.168.123.109:0/2298625354 shutdown_connections 2026-03-10T13:37:50.884 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:50.884+0000 7f06abd5a640 1 -- 192.168.123.109:0/2298625354 wait complete. 2026-03-10T13:37:51.054 INFO:tasks.cephadm:Distributing iscsi-gateway.cfg... 2026-03-10T13:37:51.054 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T13:37:51.054 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-10T13:37:51.082 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T13:37:51.082 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-10T13:37:51.108 DEBUG:teuthology.orchestra.run.vm09:iscsi.iscsi.a> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@iscsi.iscsi.a.service 2026-03-10T13:37:51.150 INFO:tasks.cephadm:Adding prometheus.a on vm09 2026-03-10T13:37:51.150 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch apply prometheus '1;vm09=a' 2026-03-10T13:37:51.275 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T13:37:51.275 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:51 vm09 ceph-mon[53367]: from='client.24401 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T13:37:51.275 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:51 vm09 ceph-mon[53367]: osdmap e52: 8 total, 8 up, 8 in 2026-03-10T13:37:51.275 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:51 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:51 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T13:37:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:51 vm05 ceph-mon[58955]: from='client.24401 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T13:37:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:51 vm05 ceph-mon[58955]: osdmap e52: 8 total, 8 up, 8 in 2026-03-10T13:37:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:51 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:51.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:51 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T13:37:51.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:51 vm05 ceph-mon[51512]: from='client.24401 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-10T13:37:51.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:51 vm05 ceph-mon[51512]: osdmap e52: 8 total, 8 up, 8 in 2026-03-10T13:37:51.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:51 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:51.385 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:37:51.520 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.519+0000 7f70f7de6640 1 -- 192.168.123.109:0/1604062142 >> v1:192.168.123.109:6789/0 conn(0x7f70f0104990 legacy=0x7f70f0104d90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:51.520 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.520+0000 7f70f7de6640 1 -- 192.168.123.109:0/1604062142 shutdown_connections 2026-03-10T13:37:51.520 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.520+0000 7f70f7de6640 1 -- 192.168.123.109:0/1604062142 >> 192.168.123.109:0/1604062142 conn(0x7f70f0100120 msgr2=0x7f70f0102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:51.520 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.520+0000 7f70f7de6640 1 -- 192.168.123.109:0/1604062142 shutdown_connections 2026-03-10T13:37:51.520 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.520+0000 7f70f7de6640 1 -- 192.168.123.109:0/1604062142 wait complete. 
2026-03-10T13:37:51.520 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.520+0000 7f70f7de6640 1 Processor -- start 2026-03-10T13:37:51.520 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.520+0000 7f70f7de6640 1 -- start start 2026-03-10T13:37:51.521 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.521+0000 7f70f7de6640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f70f019c950 con 0x7f70f010cad0 2026-03-10T13:37:51.521 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.521+0000 7f70f7de6640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f70f01a8110 con 0x7f70f0108dc0 2026-03-10T13:37:51.521 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.521+0000 7f70f7de6640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f70f01a92f0 con 0x7f70f0104990 2026-03-10T13:37:51.521 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.521+0000 7f70f635c640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f70f010cad0 0x7f70f01a59e0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:34942/0 (socket says 192.168.123.109:34942) 2026-03-10T13:37:51.521 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.521+0000 7f70f635c640 1 -- 192.168.123.109:0/2395635039 learned_addr learned my addr 192.168.123.109:0/2395635039 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-10T13:37:51.521 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.521+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1290106603 0 0) 0x7f70f019c950 con 0x7f70f010cad0 2026-03-10T13:37:51.521 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.521+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f70cc003620 con 0x7f70f010cad0 2026-03-10T13:37:51.521 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.521+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2314935293 0 0) 0x7f70f01a92f0 con 0x7f70f0104990 2026-03-10T13:37:51.522 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.521+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f70f019c950 con 0x7f70f0104990 2026-03-10T13:37:51.522 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.522+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1429763531 0 0) 0x7f70cc003620 con 0x7f70f010cad0 2026-03-10T13:37:51.522 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.522+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f70f01a92f0 con 0x7f70f010cad0 2026-03-10T13:37:51.522 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.522+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f70dc003100 con 0x7f70f010cad0 2026-03-10T13:37:51.522 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.522+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 <== mon.2 
v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 962164858 0 0) 0x7f70f019c950 con 0x7f70f0104990 2026-03-10T13:37:51.522 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.522+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f70cc003620 con 0x7f70f0104990 2026-03-10T13:37:51.523 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.522+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f70e4002f40 con 0x7f70f0104990 2026-03-10T13:37:51.523 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.522+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 326283749 0 0) 0x7f70f01a92f0 con 0x7f70f010cad0 2026-03-10T13:37:51.523 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.522+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 >> v1:192.168.123.105:6790/0 conn(0x7f70f0104990 legacy=0x7f70f019bdd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:51.524 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.522+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 >> v1:192.168.123.109:6789/0 conn(0x7f70f0108dc0 legacy=0x7f70f01a22b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:51.524 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.522+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f70f01aa4d0 con 0x7f70f010cad0 2026-03-10T13:37:51.524 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.522+0000 7f70f7de6640 1 -- 192.168.123.109:0/2395635039 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f70f01a7160 con 0x7f70f010cad0 2026-03-10T13:37:51.524 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.522+0000 7f70f7de6640 1 -- 192.168.123.109:0/2395635039 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f70f01a7740 con 0x7f70f010cad0 2026-03-10T13:37:51.524 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.522+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f70dc0034c0 con 0x7f70f010cad0 2026-03-10T13:37:51.524 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.522+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f70dc006c90 con 0x7f70f010cad0 2026-03-10T13:37:51.524 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.524+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f70dc006e30 con 0x7f70f010cad0 2026-03-10T13:37:51.524 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.524+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(53..53 src has 1..53) ==== 5529+0+0 (unknown 3242474439 0 0) 0x7f70dc0951c0 con 0x7f70f010cad0 2026-03-10T13:37:51.526 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.525+0000 7f70f7de6640 1 -- 192.168.123.109:0/2395635039 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 
0x7f70c4005180 con 0x7f70f010cad0 2026-03-10T13:37:51.528 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.528+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f70dc05e420 con 0x7f70f010cad0 2026-03-10T13:37:51.626 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.625+0000 7f70f7de6640 1 -- 192.168.123.109:0/2395635039 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}) -- 0x7f70c4002bf0 con 0x7f70cc078250 2026-03-10T13:37:51.632 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled prometheus update... 2026-03-10T13:37:51.632 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.631+0000 7f70e2ffd640 1 -- 192.168.123.109:0/2395635039 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+31 (unknown 0 0 1342662408) 0x7f70c4002bf0 con 0x7f70cc078250 2026-03-10T13:37:51.634 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.634+0000 7f70f7de6640 1 -- 192.168.123.109:0/2395635039 >> v1:192.168.123.105:6800/3845654103 conn(0x7f70cc078250 legacy=0x7f70cc07a710 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:51.634 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.634+0000 7f70f7de6640 1 -- 192.168.123.109:0/2395635039 >> v1:192.168.123.105:6789/0 conn(0x7f70f010cad0 legacy=0x7f70f01a59e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:51.634 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.634+0000 7f70f7de6640 1 -- 192.168.123.109:0/2395635039 shutdown_connections 2026-03-10T13:37:51.634 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.634+0000 7f70f7de6640 1 -- 192.168.123.109:0/2395635039 >> 192.168.123.109:0/2395635039 conn(0x7f70f0100120 msgr2=0x7f70f0109200 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:51.634 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.635+0000 7f70f7de6640 1 -- 192.168.123.109:0/2395635039 shutdown_connections 2026-03-10T13:37:51.635 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:51.635+0000 7f70f7de6640 1 -- 192.168.123.109:0/2395635039 wait complete. 
2026-03-10T13:37:51.789 DEBUG:teuthology.orchestra.run.vm09:prometheus.a> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@prometheus.a.service 2026-03-10T13:37:51.790 INFO:tasks.cephadm:Adding node-exporter.a on vm05 2026-03-10T13:37:51.790 INFO:tasks.cephadm:Adding node-exporter.b on vm09 2026-03-10T13:37:51.790 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch apply node-exporter '2;vm05=a;vm09=b' 2026-03-10T13:37:51.995 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:37:52.121 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.120+0000 7f173adae640 1 -- 192.168.123.109:0/569514469 >> v1:192.168.123.109:6789/0 conn(0x7f1734104990 legacy=0x7f1734104d90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:52.121 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.121+0000 7f173adae640 1 -- 192.168.123.109:0/569514469 shutdown_connections 2026-03-10T13:37:52.121 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.121+0000 7f173adae640 1 -- 192.168.123.109:0/569514469 >> 192.168.123.109:0/569514469 conn(0x7f1734100120 msgr2=0x7f1734102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:52.121 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.121+0000 7f173adae640 1 -- 192.168.123.109:0/569514469 shutdown_connections 2026-03-10T13:37:52.121 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.121+0000 7f173adae640 1 -- 192.168.123.109:0/569514469 wait complete. 2026-03-10T13:37:52.121 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.122+0000 7f173adae640 1 Processor -- start 2026-03-10T13:37:52.122 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.122+0000 7f173adae640 1 -- start start 2026-03-10T13:37:52.122 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.122+0000 7f173adae640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f1734078010 con 0x7f1734104990 2026-03-10T13:37:52.122 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.122+0000 7f173adae640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f17340781e0 con 0x7f173410cad0 2026-03-10T13:37:52.122 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.122+0000 7f173adae640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f17340783b0 con 0x7f1734108dc0 2026-03-10T13:37:52.122 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.122+0000 7f1738b23640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f1734104990 0x7f173407adc0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:34962/0 (socket says 192.168.123.109:34962) 2026-03-10T13:37:52.122 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.122+0000 7f1738b23640 1 -- 192.168.123.109:0/3337707161 learned_addr learned my addr 192.168.123.109:0/3337707161 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-10T13:37:52.123 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.122+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 56726843 0 0) 
0x7f1734078010 con 0x7f1734104990 2026-03-10T13:37:52.123 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.123+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f1708003620 con 0x7f1734104990 2026-03-10T13:37:52.123 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.123+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 711113799 0 0) 0x7f17340783b0 con 0x7f1734108dc0 2026-03-10T13:37:52.123 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.123+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f1734078010 con 0x7f1734108dc0 2026-03-10T13:37:52.123 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.123+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3669613213 0 0) 0x7f17340781e0 con 0x7f173410cad0 2026-03-10T13:37:52.123 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.123+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f17340783b0 con 0x7f173410cad0 2026-03-10T13:37:52.123 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.123+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1229003046 0 0) 0x7f1708003620 con 0x7f1734104990 2026-03-10T13:37:52.123 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.123+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f17340781e0 con 0x7f1734104990 2026-03-10T13:37:52.123 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.123+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f171c003200 con 0x7f1734104990 2026-03-10T13:37:52.123 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.123+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2404802981 0 0) 0x7f17340781e0 con 0x7f1734104990 2026-03-10T13:37:52.123 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.123+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 >> v1:192.168.123.105:6790/0 conn(0x7f1734108dc0 legacy=0x7f1734077900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:52.123 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.123+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 >> v1:192.168.123.109:6789/0 conn(0x7f173410cad0 legacy=0x7f17341aa530 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:52.123 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.123+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f17341aec80 con 0x7f1734104990 2026-03-10T13:37:52.123 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.123+0000 7f173adae640 1 -- 192.168.123.109:0/3337707161 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f17341abc50 con 0x7f1734104990 2026-03-10T13:37:52.123 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.123+0000 7f173adae640 1 -- 192.168.123.109:0/3337707161 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f17341ac230 con 0x7f1734104990 2026-03-10T13:37:52.124 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.124+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f171c003680 con 0x7f1734104990 2026-03-10T13:37:52.124 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.124+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f171c004fe0 con 0x7f1734104990 2026-03-10T13:37:52.125 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.125+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f171c003870 con 0x7f1734104990 2026-03-10T13:37:52.126 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.126+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(53..53 src has 1..53) ==== 5529+0+0 (unknown 3242474439 0 0) 0x7f171c094e60 con 0x7f1734104990 2026-03-10T13:37:52.126 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.126+0000 7f16fbfff640 1 -- 192.168.123.109:0/3337707161 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f17341124c0 con 0x7f1734104990 2026-03-10T13:37:52.129 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.129+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f171c05e0c0 con 0x7f1734104990 2026-03-10T13:37:52.231 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.231+0000 7f16fbfff640 1 -- 192.168.123.109:0/3337707161 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm05=a;vm09=b", "target": ["mon-mgr", ""]}) -- 0x7f17340630c0 con 0x7f1708078260 2026-03-10T13:37:52.239 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled node-exporter update... 
2026-03-10T13:37:52.239 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.239+0000 7f1729ffb640 1 -- 192.168.123.109:0/3337707161 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+34 (unknown 0 0 240551134) 0x7f17340630c0 con 0x7f1708078260 2026-03-10T13:37:52.241 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.241+0000 7f16fbfff640 1 -- 192.168.123.109:0/3337707161 >> v1:192.168.123.105:6800/3845654103 conn(0x7f1708078260 legacy=0x7f170807a720 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:52.241 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.241+0000 7f16fbfff640 1 -- 192.168.123.109:0/3337707161 >> v1:192.168.123.105:6789/0 conn(0x7f1734104990 legacy=0x7f173407adc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:52.241 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.242+0000 7f16fbfff640 1 -- 192.168.123.109:0/3337707161 shutdown_connections 2026-03-10T13:37:52.241 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.242+0000 7f16fbfff640 1 -- 192.168.123.109:0/3337707161 >> 192.168.123.109:0/3337707161 conn(0x7f1734100120 msgr2=0x7f1734109200 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:52.242 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.242+0000 7f16fbfff640 1 -- 192.168.123.109:0/3337707161 shutdown_connections 2026-03-10T13:37:52.242 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.242+0000 7f16fbfff640 1 -- 192.168.123.109:0/3337707161 wait complete. 2026-03-10T13:37:52.372 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:52 vm09 ceph-mon[53367]: pgmap v104: 68 pgs: 36 active+clean, 11 creating+peering, 21 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T13:37:52.373 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:52 vm09 ceph-mon[53367]: from='client.14505 v1:192.168.123.109:0/2298625354' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:52.373 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:52 vm09 ceph-mon[53367]: Saving service iscsi.datapool spec with placement vm09=iscsi.a;count:1 2026-03-10T13:37:52.373 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:52 vm09 ceph-mon[53367]: osdmap e53: 8 total, 8 up, 8 in 2026-03-10T13:37:52.373 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T13:37:52.373 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:52 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2494514875' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T13:37:52.373 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:52 vm09 ceph-mon[53367]: from='client.24401 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T13:37:52.373 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:52 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:52.373 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:52 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:52.373 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T13:37:52.373 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:52 vm09 ceph-mon[53367]: from='client.24401 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T13:37:52.373 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:52 vm09 ceph-mon[53367]: osdmap e54: 8 total, 8 up, 8 in 2026-03-10T13:37:52.403 DEBUG:teuthology.orchestra.run.vm05:node-exporter.a> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@node-exporter.a.service 2026-03-10T13:37:52.405 DEBUG:teuthology.orchestra.run.vm09:node-exporter.b> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@node-exporter.b.service 2026-03-10T13:37:52.406 INFO:tasks.cephadm:Adding alertmanager.a on vm05 2026-03-10T13:37:52.406 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch apply alertmanager '1;vm05=a' 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[58955]: pgmap v104: 68 pgs: 36 active+clean, 11 creating+peering, 21 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[58955]: from='client.14505 v1:192.168.123.109:0/2298625354' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[58955]: Saving service iscsi.datapool spec with placement vm09=iscsi.a;count:1 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[58955]: osdmap e53: 8 total, 8 up, 8 in 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2494514875' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[58955]: from='client.24401 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[58955]: from='client.24401 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[58955]: osdmap e54: 8 total, 8 up, 8 in 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[51512]: pgmap v104: 68 pgs: 36 active+clean, 11 creating+peering, 21 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[51512]: from='client.14505 v1:192.168.123.109:0/2298625354' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.109", "placement": "1;vm09=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[51512]: Saving service iscsi.datapool spec with placement vm09=iscsi.a;count:1 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[51512]: osdmap e53: 8 total, 8 up, 8 in 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2494514875' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[51512]: from='client.24401 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[51512]: from='client.24401 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-10T13:37:52.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:52 vm05 ceph-mon[51512]: osdmap e54: 8 total, 8 up, 8 in 2026-03-10T13:37:52.620 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:37:52.775 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.775+0000 7fdaf2b6a640 1 -- 192.168.123.109:0/1262486997 >> v1:192.168.123.109:6789/0 conn(0x7fdaec108dc0 legacy=0x7fdaec10b210 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:52.776 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.775+0000 7fdaf2b6a640 1 -- 192.168.123.109:0/1262486997 shutdown_connections 2026-03-10T13:37:52.776 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.775+0000 7fdaf2b6a640 1 -- 192.168.123.109:0/1262486997 >> 192.168.123.109:0/1262486997 conn(0x7fdaec100120 msgr2=0x7fdaec102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:52.776 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.775+0000 7fdaf2b6a640 1 -- 192.168.123.109:0/1262486997 shutdown_connections 2026-03-10T13:37:52.776 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.776+0000 7fdaf2b6a640 1 -- 192.168.123.109:0/1262486997 wait complete. 2026-03-10T13:37:52.776 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.776+0000 7fdaf2b6a640 1 Processor -- start 2026-03-10T13:37:52.776 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.776+0000 7fdaf2b6a640 1 -- start start 2026-03-10T13:37:52.777 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.776+0000 7fdaf2b6a640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fdaec19ca30 con 0x7fdaec10cad0 2026-03-10T13:37:52.777 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.776+0000 7fdaf2b6a640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fdaec1a81f0 con 0x7fdaec108dc0 2026-03-10T13:37:52.777 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.776+0000 7fdaf2b6a640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fdaec1a93d0 con 0x7fdaec104990 2026-03-10T13:37:52.777 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.777+0000 7fdae3fff640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7fdaec108dc0 0x7fdaec1a2390 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.109:43170/0 (socket says 192.168.123.109:43170) 2026-03-10T13:37:52.777 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.777+0000 7fdae3fff640 1 -- 192.168.123.109:0/1720446039 learned_addr learned my addr 192.168.123.109:0/1720446039 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-10T13:37:52.777 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.777+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4006068927 0 0) 0x7fdaec1a81f0 con 0x7fdaec108dc0 2026-03-10T13:37:52.777 
INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.777+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fdac4003620 con 0x7fdaec108dc0 2026-03-10T13:37:52.777 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.777+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3197947365 0 0) 0x7fdaec19ca30 con 0x7fdaec10cad0 2026-03-10T13:37:52.777 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.777+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fdaec1a81f0 con 0x7fdaec10cad0 2026-03-10T13:37:52.777 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.777+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3786000525 0 0) 0x7fdaec1a81f0 con 0x7fdaec10cad0 2026-03-10T13:37:52.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.777+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fdaec19ca30 con 0x7fdaec10cad0 2026-03-10T13:37:52.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.777+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fdae8003400 con 0x7fdaec10cad0 2026-03-10T13:37:52.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.777+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3832908672 0 0) 0x7fdac4003620 con 0x7fdaec108dc0 2026-03-10T13:37:52.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.778+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fdaec1a81f0 con 0x7fdaec108dc0 2026-03-10T13:37:52.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.778+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fdadc003280 con 0x7fdaec108dc0 2026-03-10T13:37:52.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.778+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2090519044 0 0) 0x7fdaec19ca30 con 0x7fdaec10cad0 2026-03-10T13:37:52.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.778+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 >> v1:192.168.123.105:6790/0 conn(0x7fdaec104990 legacy=0x7fdaec19beb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:52.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.778+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 >> v1:192.168.123.109:6789/0 conn(0x7fdaec108dc0 legacy=0x7fdaec1a2390 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:52.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.778+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fdaec1aa5b0 con 0x7fdaec10cad0 2026-03-10T13:37:52.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.778+0000 
7fdaf2b6a640 1 -- 192.168.123.109:0/1720446039 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fdaec1a7240 con 0x7fdaec10cad0 2026-03-10T13:37:52.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.778+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fdae8003ee0 con 0x7fdaec10cad0 2026-03-10T13:37:52.778 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.778+0000 7fdaf2b6a640 1 -- 192.168.123.109:0/1720446039 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fdaec1a7820 con 0x7fdaec10cad0 2026-03-10T13:37:52.779 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.778+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fdae8004db0 con 0x7fdaec10cad0 2026-03-10T13:37:52.780 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.780+0000 7fdaf2b6a640 1 -- 192.168.123.109:0/1720446039 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fdab4005180 con 0x7fdaec10cad0 2026-03-10T13:37:52.780 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.780+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7fdae8003a90 con 0x7fdaec10cad0 2026-03-10T13:37:52.781 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.781+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(54..54 src has 1..54) ==== 5540+0+0 (unknown 93706720 0 0) 0x7fdae8093f40 con 0x7fdaec10cad0 2026-03-10T13:37:52.784 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.784+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fdae805d190 con 0x7fdaec10cad0 2026-03-10T13:37:52.884 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:52.884+0000 7fdaf2b6a640 1 -- 192.168.123.109:0/1720446039 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm05=a", "target": ["mon-mgr", ""]}) -- 0x7fdab4002bf0 con 0x7fdac40781c0 2026-03-10T13:37:53.039 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.038+0000 7fdae1ffb640 1 -- 192.168.123.109:0/1720446039 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+33 (unknown 0 0 1850065467) 0x7fdab4002bf0 con 0x7fdac40781c0 2026-03-10T13:37:53.039 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled alertmanager update... 
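As the mgr_command record above shows, `ceph orch apply` is not handled by the monitors: the CLI forwards a JSON payload to the active mgr and gets back a mgr_command_reply. Rebuilding that exact payload (fields copied verbatim from the log; the "target" field is what routes it to whichever mgr is currently active):

```python
import json

# Payload fields copied from the mgr_command(tid 0: ...) record above.
cmd = {
    "prefix": "orch apply",
    "service_type": "alertmanager",
    "placement": "1;vm05=a",
    "target": ["mon-mgr", ""],  # resolve to the currently active mgr
}
print(json.dumps(cmd))
```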
2026-03-10T13:37:53.041 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.041+0000 7fdaf2b6a640 1 -- 192.168.123.109:0/1720446039 >> v1:192.168.123.105:6800/3845654103 conn(0x7fdac40781c0 legacy=0x7fdac407a680 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:53.041 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.041+0000 7fdaf2b6a640 1 -- 192.168.123.109:0/1720446039 >> v1:192.168.123.105:6789/0 conn(0x7fdaec10cad0 legacy=0x7fdaec1a5ac0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:53.041 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.041+0000 7fdaf2b6a640 1 -- 192.168.123.109:0/1720446039 shutdown_connections 2026-03-10T13:37:53.041 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.041+0000 7fdaf2b6a640 1 -- 192.168.123.109:0/1720446039 >> 192.168.123.109:0/1720446039 conn(0x7fdaec100120 msgr2=0x7fdaec109200 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:53.041 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.041+0000 7fdaf2b6a640 1 -- 192.168.123.109:0/1720446039 shutdown_connections 2026-03-10T13:37:53.041 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.042+0000 7fdaf2b6a640 1 -- 192.168.123.109:0/1720446039 wait complete. 2026-03-10T13:37:53.195 DEBUG:teuthology.orchestra.run.vm05:alertmanager.a> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@alertmanager.a.service 2026-03-10T13:37:53.197 INFO:tasks.cephadm:Adding grafana.a on vm09 2026-03-10T13:37:53.197 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph orch apply grafana '1;vm09=a' 2026-03-10T13:37:53.296 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:53 vm09 ceph-mon[53367]: from='client.14508 v1:192.168.123.109:0/2395635039' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:53.296 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:53 vm09 ceph-mon[53367]: Saving service prometheus spec with placement vm09=a;count:1 2026-03-10T13:37:53.296 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:53 vm09 ceph-mon[53367]: from='client.14514 v1:192.168.123.109:0/3337707161' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm05=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:53.296 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:53 vm09 ceph-mon[53367]: Saving service node-exporter spec with placement vm05=a;vm09=b;count:2 2026-03-10T13:37:53.296 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:53 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:53.296 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:53 vm09 ceph-mon[53367]: osdmap e55: 8 total, 8 up, 8 in 2026-03-10T13:37:53.296 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T13:37:53.296 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:53 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2494514875' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T13:37:53.296 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:53 vm09 ceph-mon[53367]: from='client.24401 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T13:37:53.389 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:37:53.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[51512]: from='client.14508 v1:192.168.123.109:0/2395635039' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:53.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[51512]: Saving service prometheus spec with placement vm09=a;count:1 2026-03-10T13:37:53.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[51512]: from='client.14514 v1:192.168.123.109:0/3337707161' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm05=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:53.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[51512]: Saving service node-exporter spec with placement vm05=a;vm09=b;count:2 2026-03-10T13:37:53.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:53.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[51512]: osdmap e55: 8 total, 8 up, 8 in 2026-03-10T13:37:53.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T13:37:53.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2494514875' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T13:37:53.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[51512]: from='client.24401 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T13:37:53.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[58955]: from='client.14508 v1:192.168.123.109:0/2395635039' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:53.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[58955]: Saving service prometheus spec with placement vm09=a;count:1 2026-03-10T13:37:53.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[58955]: from='client.14514 v1:192.168.123.109:0/3337707161' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm05=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:53.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[58955]: Saving service node-exporter spec with placement vm05=a;vm09=b;count:2 2026-03-10T13:37:53.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:53.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[58955]: osdmap e55: 8 total, 8 up, 8 in 2026-03-10T13:37:53.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T13:37:53.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2494514875' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T13:37:53.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:53 vm05 ceph-mon[58955]: from='client.24401 ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-10T13:37:53.597 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.596+0000 7fb3510eb640 1 -- 192.168.123.109:0/4032480684 >> v1:192.168.123.109:6789/0 conn(0x7fb34c104990 legacy=0x7fb34c104d90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:53.597 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.597+0000 7fb3510eb640 1 -- 192.168.123.109:0/4032480684 shutdown_connections 2026-03-10T13:37:53.597 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.597+0000 7fb3510eb640 1 -- 192.168.123.109:0/4032480684 >> 192.168.123.109:0/4032480684 conn(0x7fb34c100120 msgr2=0x7fb34c102560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:53.597 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.597+0000 7fb3510eb640 1 -- 192.168.123.109:0/4032480684 shutdown_connections 2026-03-10T13:37:53.598 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.597+0000 7fb3510eb640 1 -- 192.168.123.109:0/4032480684 wait complete. 
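The `osd pool application enable` requests that all three mons echo above are dispatched automatically by the rgw daemon (client.rgw.foo.a) as it creates its default.rgw.* pools; they are idempotent, and the hand-run equivalent is the standard CLI form. A sketch, assuming a host with /etc/ceph/ceph.conf and an admin keyring in place:

```python
import subprocess

# Hand-run equivalent of the mon command client.rgw.foo.a dispatches above.
subprocess.run(
    ["ceph", "osd", "pool", "application", "enable",
     "default.rgw.meta", "rgw"],
    check=True,
)
```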
2026-03-10T13:37:53.598 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.598+0000 7fb3510eb640 1 Processor -- start 2026-03-10T13:37:53.598 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.598+0000 7fb3510eb640 1 -- start start 2026-03-10T13:37:53.598 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.598+0000 7fb3510eb640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fb34c078010 con 0x7fb34c108dc0 2026-03-10T13:37:53.598 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.598+0000 7fb3510eb640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fb34c0781e0 con 0x7fb34c10cad0 2026-03-10T13:37:53.598 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.598+0000 7fb3510eb640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fb34c0783b0 con 0x7fb34c104990 2026-03-10T13:37:53.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.598+0000 7fb34a575640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fb34c108dc0 0x7fb34c077900 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.109:35016/0 (socket says 192.168.123.109:35016) 2026-03-10T13:37:53.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.598+0000 7fb34a575640 1 -- 192.168.123.109:0/2458843306 learned_addr learned my addr 192.168.123.109:0/2458843306 (peer_addr_for_me v1:192.168.123.109:0/0) 2026-03-10T13:37:53.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.599+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2743090626 0 0) 0x7fb34c078010 con 0x7fb34c108dc0 2026-03-10T13:37:53.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.599+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fb30c003620 con 0x7fb34c108dc0 2026-03-10T13:37:53.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.599+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3435813188 0 0) 0x7fb34c0781e0 con 0x7fb34c10cad0 2026-03-10T13:37:53.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.599+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fb34c078010 con 0x7fb34c10cad0 2026-03-10T13:37:53.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.599+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2040452439 0 0) 0x7fb34c0783b0 con 0x7fb34c104990 2026-03-10T13:37:53.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.599+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fb34c0781e0 con 0x7fb34c104990 2026-03-10T13:37:53.599 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.599+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3587688505 0 0) 0x7fb34c078010 con 0x7fb34c10cad0 2026-03-10T13:37:53.600 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.599+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 --> 
v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fb34c0783b0 con 0x7fb34c10cad0 2026-03-10T13:37:53.600 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.599+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3882236013 0 0) 0x7fb34c0781e0 con 0x7fb34c104990 2026-03-10T13:37:53.600 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.600+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fb34c078010 con 0x7fb34c104990 2026-03-10T13:37:53.600 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.600+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fb33c003340 con 0x7fb34c10cad0 2026-03-10T13:37:53.600 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.600+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fb3340027b0 con 0x7fb34c104990 2026-03-10T13:37:53.600 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.600+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 894304183 0 0) 0x7fb30c003620 con 0x7fb34c108dc0 2026-03-10T13:37:53.601 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.600+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fb34c0781e0 con 0x7fb34c108dc0 2026-03-10T13:37:53.601 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.600+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2683338876 0 0) 0x7fb34c0783b0 con 0x7fb34c10cad0 2026-03-10T13:37:53.601 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.600+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 >> v1:192.168.123.105:6790/0 conn(0x7fb34c104990 legacy=0x7fb34c07adc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:53.601 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.600+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 >> v1:192.168.123.105:6789/0 conn(0x7fb34c108dc0 legacy=0x7fb34c077900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:53.601 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.600+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb34c1aec80 con 0x7fb34c10cad0 2026-03-10T13:37:53.601 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.600+0000 7fb3510eb640 1 -- 192.168.123.109:0/2458843306 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fb34c1abc50 con 0x7fb34c10cad0 2026-03-10T13:37:53.601 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.601+0000 7fb3510eb640 1 -- 192.168.123.109:0/2458843306 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fb34c1ac1e0 con 0x7fb34c10cad0 2026-03-10T13:37:53.602 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.602+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fb33c003d40 con 0x7fb34c10cad0 
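The messenger lines above repeat the same mon-client bootstrap for every short-lived `ceph` invocation: probe all three monitors with auth(proto 0), complete the cephx exchange (the proto 2 replies) on whichever answers, mark_down the losing connections, then mon_subscribe to config/monmap/mgrmap/osdmap before issuing get_command_descriptions. A toy illustration of the probe-all, keep-the-winner pattern (plain Python, not Ceph source):

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
import random
import time

# Toy model only: stand-ins for the three mon addresses in this log.
MONS = ["v1:192.168.123.105:6789/0",
        "v1:192.168.123.109:6789/0",
        "v1:192.168.123.105:6790/0"]

def authenticate(mon: str) -> str:
    # Stand-in for the auth(proto 0) probe plus the cephx round-trips.
    time.sleep(random.uniform(0.01, 0.05))
    return mon

with ThreadPoolExecutor(max_workers=len(MONS)) as pool:
    futures = [pool.submit(authenticate, m) for m in MONS]
    done, pending = wait(futures, return_when=FIRST_COMPLETED)
    winner = next(iter(done)).result()
    # The real client calls mark_down() on each losing connection here.
    for f in pending:
        f.cancel()
print("session established with", winner)
```

The race also explains why successive invocations in this log end up pinned to different monitors (mon.0, mon.1, mon.2 in turn).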
2026-03-10T13:37:53.602 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.602+0000 7fb3510eb640 1 -- 192.168.123.109:0/2458843306 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fb310005180 con 0x7fb34c10cad0 2026-03-10T13:37:53.603 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.602+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fb33c004c60 con 0x7fb34c10cad0 2026-03-10T13:37:53.607 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.603+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7fb33c0038f0 con 0x7fb34c10cad0 2026-03-10T13:37:53.607 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.605+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(55..55 src has 1..55) ==== 5895+0+0 (unknown 2841444271 0 0) 0x7fb33c058a20 con 0x7fb34c10cad0 2026-03-10T13:37:53.607 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.607+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fb33c05ce10 con 0x7fb34c10cad0 2026-03-10T13:37:53.710 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.710+0000 7fb3510eb640 1 -- 192.168.123.109:0/2458843306 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}) -- 0x7fb310002bf0 con 0x7fb30c0785f0 2026-03-10T13:37:53.719 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.719+0000 7fb32bfff640 1 -- 192.168.123.109:0/2458843306 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+28 (unknown 0 0 664801700) 0x7fb310002bf0 con 0x7fb30c0785f0 2026-03-10T13:37:53.719 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled grafana update... 2026-03-10T13:37:53.721 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.721+0000 7fb3510eb640 1 -- 192.168.123.109:0/2458843306 >> v1:192.168.123.105:6800/3845654103 conn(0x7fb30c0785f0 legacy=0x7fb30c07aab0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:53.722 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.722+0000 7fb3510eb640 1 -- 192.168.123.109:0/2458843306 >> v1:192.168.123.109:6789/0 conn(0x7fb34c10cad0 legacy=0x7fb34c1aa530 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:53.722 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.722+0000 7fb3510eb640 1 -- 192.168.123.109:0/2458843306 shutdown_connections 2026-03-10T13:37:53.722 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.722+0000 7fb3510eb640 1 -- 192.168.123.109:0/2458843306 >> 192.168.123.109:0/2458843306 conn(0x7fb34c100120 msgr2=0x7fb34c109200 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:53.722 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.722+0000 7fb3510eb640 1 -- 192.168.123.109:0/2458843306 shutdown_connections 2026-03-10T13:37:53.722 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:53.722+0000 7fb3510eb640 1 -- 192.168.123.109:0/2458843306 wait complete. 
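With grafana scheduled, all four monitoring services have now been applied through the same cephadm-shell wrapper visible in the DEBUG lines. The image, fsid, and placements below are copied from this log; the loop itself is an illustrative condensation of what the cephadm task runs one service at a time:

```python
import subprocess

FSID = "e063dc72-1c85-11f1-a098-09993c5c5b66"
IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

def orch_apply(service: str, placement: str) -> None:
    # Same invocation shape as the DEBUG:teuthology.orchestra.run lines.
    subprocess.run(
        ["sudo", "cephadm", "--image", IMAGE, "shell",
         "-c", "/etc/ceph/ceph.conf",
         "-k", "/etc/ceph/ceph.client.admin.keyring",
         "--fsid", FSID,
         "--", "ceph", "orch", "apply", service, placement],
        check=True,
    )

for service, placement in [
    ("prometheus", "1;vm09=a"),
    ("node-exporter", "2;vm05=a;vm09=b"),
    ("alertmanager", "1;vm05=a"),
    ("grafana", "1;vm09=a"),
]:
    orch_apply(service, placement)
```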
2026-03-10T13:37:53.896 DEBUG:teuthology.orchestra.run.vm09:grafana.a> sudo journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@grafana.a.service 2026-03-10T13:37:53.898 INFO:tasks.cephadm:Setting up client nodes... 2026-03-10T13:37:53.898 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T13:37:54.066 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:54.217 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.215+0000 7f789dbfb640 1 -- 192.168.123.105:0/365576366 >> v1:192.168.123.105:6789/0 conn(0x7f7898106b10 legacy=0x7f7898108f60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:54.217 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.217+0000 7f789dbfb640 1 -- 192.168.123.105:0/365576366 shutdown_connections 2026-03-10T13:37:54.217 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.217+0000 7f789dbfb640 1 -- 192.168.123.105:0/365576366 >> 192.168.123.105:0/365576366 conn(0x7f78980fde70 msgr2=0x7f78981002b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:54.217 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.217+0000 7f789dbfb640 1 -- 192.168.123.105:0/365576366 shutdown_connections 2026-03-10T13:37:54.217 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.217+0000 7f789dbfb640 1 -- 192.168.123.105:0/365576366 wait complete. 2026-03-10T13:37:54.218 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.217+0000 7f789dbfb640 1 Processor -- start 2026-03-10T13:37:54.218 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.217+0000 7f789dbfb640 1 -- start start 2026-03-10T13:37:54.218 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.218+0000 7f789dbfb640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f78981001d0 con 0x7f7898106b10 2026-03-10T13:37:54.218 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.218+0000 7f789dbfb640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f78981a7ef0 con 0x7f78981026e0 2026-03-10T13:37:54.218 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.218+0000 7f789dbfb640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f78981a90d0 con 0x7f789810a820 2026-03-10T13:37:54.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.218+0000 7f7896ffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f7898106b10 0x7f78981a2090 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:54902/0 (socket says 192.168.123.105:54902) 2026-03-10T13:37:54.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.218+0000 7f7896ffd640 1 -- 192.168.123.105:0/2758845318 learned_addr learned my addr 192.168.123.105:0/2758845318 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:54.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.218+0000 7f7894ff9640 1 -- 192.168.123.105:0/2758845318 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3411149680 0 0) 0x7f78981001d0 con 0x7f7898106b10 
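The `ceph auth get-or-create client.0 ...` invocation above turns its capability pairs into a flat list in the mon command JSON, as the mon_command record a few lines further down shows. A sketch of that marshalling (the helper code is assumed, but the output matches the logged payload):

```python
import json

# The CLI's cap arguments become a flat [key, value, key, value, ...] list.
caps = {"mon": "allow *", "osd": "allow *", "mds": "allow *", "mgr": "allow *"}
cmd = {
    "prefix": "auth get-or-create",
    "entity": "client.0",
    "caps": [kv for pair in caps.items() for kv in pair],
}
print(json.dumps(cmd))
```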
2026-03-10T13:37:54.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.218+0000 7f7894ff9640 1 -- 192.168.123.105:0/2758845318 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f786c003620 con 0x7f7898106b10 2026-03-10T13:37:54.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.219+0000 7f7894ff9640 1 -- 192.168.123.105:0/2758845318 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 488664519 0 0) 0x7f786c003620 con 0x7f7898106b10 2026-03-10T13:37:54.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.219+0000 7f7894ff9640 1 -- 192.168.123.105:0/2758845318 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f78981001d0 con 0x7f7898106b10 2026-03-10T13:37:54.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.219+0000 7f7894ff9640 1 -- 192.168.123.105:0/2758845318 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f7888004140 con 0x7f7898106b10 2026-03-10T13:37:54.219 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.219+0000 7f7894ff9640 1 -- 192.168.123.105:0/2758845318 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1067443836 0 0) 0x7f78981001d0 con 0x7f7898106b10 2026-03-10T13:37:54.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.219+0000 7f7894ff9640 1 -- 192.168.123.105:0/2758845318 >> v1:192.168.123.105:6790/0 conn(0x7f789810a820 legacy=0x7f78981a57c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:54.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.219+0000 7f7894ff9640 1 -- 192.168.123.105:0/2758845318 >> v1:192.168.123.109:6789/0 conn(0x7f78981026e0 legacy=0x7f78980ff650 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:54.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.219+0000 7f7894ff9640 1 -- 192.168.123.105:0/2758845318 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f78981aa2b0 con 0x7f7898106b10 2026-03-10T13:37:54.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.219+0000 7f789dbfb640 1 -- 192.168.123.105:0/2758845318 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f78981a8120 con 0x7f7898106b10 2026-03-10T13:37:54.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.219+0000 7f789dbfb640 1 -- 192.168.123.105:0/2758845318 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f78981a86b0 con 0x7f7898106b10 2026-03-10T13:37:54.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.221+0000 7f7894ff9640 1 -- 192.168.123.105:0/2758845318 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f7888004800 con 0x7f7898106b10 2026-03-10T13:37:54.224 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.221+0000 7f789dbfb640 1 -- 192.168.123.105:0/2758845318 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f785c005180 con 0x7f7898106b10 2026-03-10T13:37:54.226 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.222+0000 7f7894ff9640 1 -- 192.168.123.105:0/2758845318 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f7888004de0 con 0x7f7898106b10 2026-03-10T13:37:54.226 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.222+0000 7f7894ff9640 1 -- 
192.168.123.105:0/2758845318 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f7888005080 con 0x7f7898106b10 2026-03-10T13:37:54.226 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.222+0000 7f7894ff9640 1 -- 192.168.123.105:0/2758845318 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(55..55 src has 1..55) ==== 5895+0+0 (unknown 2841444271 0 0) 0x7f7888096050 con 0x7f7898106b10 2026-03-10T13:37:54.226 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.224+0000 7f7894ff9640 1 -- 192.168.123.105:0/2758845318 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f788805f1c0 con 0x7f7898106b10 2026-03-10T13:37:54.371 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.371+0000 7f789dbfb640 1 -- 192.168.123.105:0/2758845318 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) -- 0x7f785c005470 con 0x7f7898106b10 2026-03-10T13:37:54.391 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.391+0000 7f7894ff9640 1 -- 192.168.123.105:0/2758845318 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v15) ==== 170+0+59 (unknown 326183931 0 2887525312) 0x7f7888062e70 con 0x7f7898106b10 2026-03-10T13:37:54.393 INFO:teuthology.orchestra.run.vm05.stdout:[client.0] 2026-03-10T13:37:54.393 INFO:teuthology.orchestra.run.vm05.stdout: key = AQCyHrBptjI9FhAAP60ge6ROfnN5ndruVQ8hSQ== 2026-03-10T13:37:54.393 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.393+0000 7f78727fc640 1 -- 192.168.123.105:0/2758845318 >> v1:192.168.123.105:6800/3845654103 conn(0x7f786c0782f0 legacy=0x7f786c07a7b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:54.393 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.393+0000 7f78727fc640 1 -- 192.168.123.105:0/2758845318 >> v1:192.168.123.105:6789/0 conn(0x7f7898106b10 legacy=0x7f78981a2090 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:54.394 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.393+0000 7f78727fc640 1 -- 192.168.123.105:0/2758845318 shutdown_connections 2026-03-10T13:37:54.394 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.393+0000 7f78727fc640 1 -- 192.168.123.105:0/2758845318 >> 192.168.123.105:0/2758845318 conn(0x7f78980fde70 msgr2=0x7f78981096c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:54.394 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.394+0000 7f78727fc640 1 -- 192.168.123.105:0/2758845318 shutdown_connections 2026-03-10T13:37:54.394 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:54.394+0000 7f78727fc640 1 -- 192.168.123.105:0/2758845318 wait complete. 
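teuthology then installs the keyring it just captured: the `set -ex` / `dd of=/etc/ceph/ceph.client.0.keyring` / `chmod 0644` lines that follow simply write the [client.0] section to disk world-readable. A plain-Python equivalent (key value elided deliberately; root privileges assumed for writing under /etc/ceph):

```python
import os

# Write the captured keyring where the ceph CLI looks for client.0's key,
# then match the chmod 0644 from the log.
keyring_text = "[client.0]\n\tkey = <key from the stdout above>\n"
path = "/etc/ceph/ceph.client.0.keyring"
with open(path, "w") as f:
    f.write(keyring_text)
os.chmod(path, 0o644)
```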
2026-03-10T13:37:54.517 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:54 vm05 ceph-mon[51512]: pgmap v107: 100 pgs: 44 active+clean, 16 creating+peering, 40 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-10T13:37:54.517 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:54 vm05 ceph-mon[51512]: from='client.14520 v1:192.168.123.109:0/1720446039' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm05=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:54.517 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:54 vm05 ceph-mon[51512]: Saving service alertmanager spec with placement vm05=a;count:1 2026-03-10T13:37:54.517 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:54 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:54.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:54 vm05 ceph-mon[58955]: pgmap v107: 100 pgs: 44 active+clean, 16 creating+peering, 40 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-10T13:37:54.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:54 vm05 ceph-mon[58955]: from='client.14520 v1:192.168.123.109:0/1720446039' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm05=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:54.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:54 vm05 ceph-mon[58955]: Saving service alertmanager spec with placement vm05=a;count:1 2026-03-10T13:37:54.517 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:54 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:54.541 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T13:37:54.541 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-10T13:37:54.541 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-10T13:37:54.582 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T13:37:54.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:54 vm09 ceph-mon[53367]: pgmap v107: 100 pgs: 44 active+clean, 16 creating+peering, 40 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-10T13:37:54.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:54 vm09 ceph-mon[53367]: from='client.14520 v1:192.168.123.109:0/1720446039' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm05=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:37:54.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:54 vm09 ceph-mon[53367]: Saving service alertmanager spec with placement vm05=a;count:1 2026-03-10T13:37:54.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:54 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:54.765 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.b/config 2026-03-10T13:37:54.904 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.904+0000 
7f427fa66640 1 -- 192.168.123.109:0/1156425822 >> v1:192.168.123.109:6789/0 conn(0x7f427810cab0 legacy=0x7f427810ef70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:37:54.904 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.904+0000 7f427fa66640 1 -- 192.168.123.109:0/1156425822 shutdown_connections
2026-03-10T13:37:54.904 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.904+0000 7f427fa66640 1 -- 192.168.123.109:0/1156425822 >> 192.168.123.109:0/1156425822 conn(0x7f4278100120 msgr2=0x7f4278102540 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:37:54.905 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.904+0000 7f427fa66640 1 -- 192.168.123.109:0/1156425822 shutdown_connections
2026-03-10T13:37:54.905 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.905+0000 7f427fa66640 1 -- 192.168.123.109:0/1156425822 wait complete.
2026-03-10T13:37:54.905 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.905+0000 7f427fa66640 1 Processor -- start
2026-03-10T13:37:54.905 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.905+0000 7f427fa66640 1 -- start start
2026-03-10T13:37:54.905 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.905+0000 7f427fa66640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f427819c830 con 0x7f4278108da0
2026-03-10T13:37:54.906 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.905+0000 7f427fa66640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f42781a7ff0 con 0x7f4278104970
2026-03-10T13:37:54.906 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.905+0000 7f427fa66640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f42781a91d0 con 0x7f427810cab0
2026-03-10T13:37:54.906 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.906+0000 7f427dfdc640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f427810cab0 0x7f42781a58c0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.109:53306/0 (socket says 192.168.123.109:53306)
2026-03-10T13:37:54.906 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.906+0000 7f427dfdc640 1 -- 192.168.123.109:0/845040118 learned_addr learned my addr 192.168.123.109:0/845040118 (peer_addr_for_me v1:192.168.123.109:0/0)
2026-03-10T13:37:54.906 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.906+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2683020578 0 0) 0x7f42781a91d0 con 0x7f427810cab0
2026-03-10T13:37:54.906 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.906+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f4254003620 con 0x7f427810cab0
2026-03-10T13:37:54.906 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.906+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 485296023 0 0) 0x7f427819c830 con 0x7f4278108da0
2026-03-10T13:37:54.906 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.906+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f42781a91d0 con 0x7f4278108da0
2026-03-10T13:37:54.907 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.906+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3020912027 0 0) 0x7f42781a7ff0 con 0x7f4278104970
2026-03-10T13:37:54.907 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.906+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f427819c830 con 0x7f4278104970
2026-03-10T13:37:54.907 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.906+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1828284932 0 0) 0x7f4254003620 con 0x7f427810cab0
2026-03-10T13:37:54.907 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.906+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f42781a7ff0 con 0x7f427810cab0
2026-03-10T13:37:54.907 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.906+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4087382804 0 0) 0x7f42781a91d0 con 0x7f4278108da0
2026-03-10T13:37:54.907 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.906+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f4254003620 con 0x7f4278108da0
2026-03-10T13:37:54.907 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.906+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f4274004850 con 0x7f427810cab0
2026-03-10T13:37:54.907 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.906+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f4268003100 con 0x7f4278108da0
2026-03-10T13:37:54.907 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.907+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2928791854 0 0) 0x7f42781a7ff0 con 0x7f427810cab0
2026-03-10T13:37:54.907 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.907+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 >> v1:192.168.123.109:6789/0 conn(0x7f4278104970 legacy=0x7f427819bcb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:37:54.907 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.907+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 >> v1:192.168.123.105:6789/0 conn(0x7f4278108da0 legacy=0x7f42781a2190 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:37:54.907 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.907+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f42781aa3b0 con 0x7f427810cab0
2026-03-10T13:37:54.907 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.907+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f4274004120 con 0x7f427810cab0
2026-03-10T13:37:54.907 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.908+0000 7f427fa66640 1 -- 192.168.123.109:0/845040118 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f42781a7040 con 0x7f427810cab0
2026-03-10T13:37:54.908 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.908+0000 7f427fa66640 1 -- 192.168.123.109:0/845040118 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f42781a7620 con 0x7f427810cab0
2026-03-10T13:37:54.908 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.908+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f4274005380 con 0x7f427810cab0
2026-03-10T13:37:54.909 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.909+0000 7f427fa66640 1 -- 192.168.123.109:0/845040118 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4278111ba0 con 0x7f427810cab0
2026-03-10T13:37:54.909 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.909+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f42740042d0 con 0x7f427810cab0
2026-03-10T13:37:54.910 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.910+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(56..56 src has 1..56) ==== 5906+0+0 (unknown 1876954169 0 0) 0x7f4274095350 con 0x7f427810cab0
2026-03-10T13:37:54.913 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:54.913+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f427405e430 con 0x7f427810cab0
2026-03-10T13:37:55.039 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:55.039+0000 7f427fa66640 1 -- 192.168.123.109:0/845040118 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) -- 0x7f42781a7b90 con 0x7f427810cab0
2026-03-10T13:37:55.046 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:55.046+0000 7f42667fc640 1 -- 192.168.123.109:0/845040118 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v16) ==== 170+0+59 (unknown 3526050746 0 1753137924) 0x7f42740620e0 con 0x7f427810cab0
2026-03-10T13:37:55.046 INFO:teuthology.orchestra.run.vm09.stdout:[client.1]
2026-03-10T13:37:55.046 INFO:teuthology.orchestra.run.vm09.stdout: key = AQCzHrBp6tlvAhAAlIQg5dffXcYgJnMacUs2CQ==
2026-03-10T13:37:55.049 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:55.049+0000 7f427fa66640 1 -- 192.168.123.109:0/845040118 >> v1:192.168.123.105:6800/3845654103 conn(0x7f42540780d0 legacy=0x7f425407a590 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:37:55.049 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:55.049+0000 7f427fa66640 1 -- 192.168.123.109:0/845040118 >> v1:192.168.123.105:6790/0 conn(0x7f427810cab0 legacy=0x7f42781a58c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:37:55.049 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:55.049+0000 7f427fa66640 1 -- 192.168.123.109:0/845040118 shutdown_connections
2026-03-10T13:37:55.049 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:55.049+0000 7f427fa66640 1 -- 192.168.123.109:0/845040118 >> 192.168.123.109:0/845040118 conn(0x7f4278100120 msgr2=0x7f427810b1f0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:37:55.049 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:55.049+0000 7f427fa66640 1 -- 192.168.123.109:0/845040118 shutdown_connections
2026-03-10T13:37:55.049 INFO:teuthology.orchestra.run.vm09.stderr:2026-03-10T13:37:55.049+0000 7f427fa66640 1 -- 192.168.123.109:0/845040118 wait complete.
2026-03-10T13:37:55.475 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T13:37:55.475 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.1.keyring
2026-03-10T13:37:55.475 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring
2026-03-10T13:37:55.524 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean...
2026-03-10T13:37:55.524 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available
2026-03-10T13:37:55.525 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph mgr dump --format=json
2026-03-10T13:37:55.778 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config
2026-03-10T13:37:55.817 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[58955]: from='client.24421 v1:192.168.123.109:0/2458843306' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:37:55.817 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[58955]: Saving service grafana spec with placement vm09=a;count:1
2026-03-10T13:37:55.817 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
2026-03-10T13:37:55.817 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[58955]: from='client.24401 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
2026-03-10T13:37:55.817 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[58955]: osdmap e56: 8 total, 8 up, 8 in
2026-03-10T13:37:55.817 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T13:37:55.817 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2494514875' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T13:37:55.817 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2758845318' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T13:37:55.817 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[58955]: from='client.24401 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T13:37:55.817 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2758845318' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T13:37:55.817 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.109:0/845040118' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T13:37:55.817 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[58955]: from='client.24437 ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T13:37:55.817 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[58955]: from='client.24437 ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T13:37:55.818 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[51512]: from='client.24421 v1:192.168.123.109:0/2458843306' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:37:55.818 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[51512]: Saving service grafana spec with placement vm09=a;count:1
2026-03-10T13:37:55.818 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
2026-03-10T13:37:55.818 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[51512]: from='client.24401 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
2026-03-10T13:37:55.818 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[51512]: osdmap e56: 8 total, 8 up, 8 in
2026-03-10T13:37:55.818 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T13:37:55.818 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2494514875' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T13:37:55.818 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2758845318' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T13:37:55.818 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[51512]: from='client.24401 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T13:37:55.818 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2758845318' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T13:37:55.818 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.109:0/845040118' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T13:37:55.818 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[51512]: from='client.24437 ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T13:37:55.818 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:55 vm05 ceph-mon[51512]: from='client.24437 ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T13:37:55.818 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 10 13:37:55 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-rgw-foo-a[83421]: 2026-03-10T13:37:55.513+0000 7f6b59c2f980 -1 LDAP not started since no server URIs were provided in the configuration.
2026-03-10T13:37:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:55 vm09 ceph-mon[53367]: from='client.24421 v1:192.168.123.109:0/2458843306' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm09=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:37:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:55 vm09 ceph-mon[53367]: Saving service grafana spec with placement vm09=a;count:1
2026-03-10T13:37:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
2026-03-10T13:37:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:55 vm09 ceph-mon[53367]: from='client.24401 ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
2026-03-10T13:37:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:55 vm09 ceph-mon[53367]: osdmap e56: 8 total, 8 up, 8 in
2026-03-10T13:37:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T13:37:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2494514875' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T13:37:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2758845318' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T13:37:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:55 vm09 ceph-mon[53367]: from='client.24401 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T13:37:55.925 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2758845318' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T13:37:55.925 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.109:0/845040118' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T13:37:55.925 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:55 vm09 ceph-mon[53367]: from='client.24437 ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T13:37:55.925 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:55 vm09 ceph-mon[53367]: from='client.24437 ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T13:37:55.950 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.949+0000 7f26d7b1a640 1 -- 192.168.123.105:0/2235364708 >> v1:192.168.123.105:6789/0 conn(0x7f26d010fdc0 legacy=0x7f26d0110270 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:37:55.950 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.950+0000 7f26d7b1a640 1 -- 192.168.123.105:0/2235364708 shutdown_connections
2026-03-10T13:37:55.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.950+0000 7f26d7b1a640 1 -- 192.168.123.105:0/2235364708 >> 192.168.123.105:0/2235364708 conn(0x7f26d006d730 msgr2=0x7f26d006db40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:37:55.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.950+0000 7f26d7b1a640 1 -- 192.168.123.105:0/2235364708 shutdown_connections
2026-03-10T13:37:55.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.950+0000 7f26d7b1a640 1 -- 192.168.123.105:0/2235364708 wait complete.
2026-03-10T13:37:55.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.951+0000 7f26d7b1a640 1 Processor -- start
2026-03-10T13:37:55.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.951+0000 7f26d7b1a640 1 -- start start
2026-03-10T13:37:55.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.951+0000 7f26d7b1a640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f26d011e030 con 0x7f26d0074250
2026-03-10T13:37:55.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.951+0000 7f26d7b1a640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f26d011e200 con 0x7f26d010fdc0
2026-03-10T13:37:55.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.951+0000 7f26d7b1a640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f26d011e3d0 con 0x7f26d011e790
2026-03-10T13:37:55.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.952+0000 7f26d6090640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f26d011e790 0x7f26d011d920 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:41010/0 (socket says 192.168.123.105:41010)
2026-03-10T13:37:55.952 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.952+0000 7f26d6090640 1 -- 192.168.123.105:0/2458719331 learned_addr learned my addr 192.168.123.105:0/2458719331 (peer_addr_for_me v1:192.168.123.105:0/0)
2026-03-10T13:37:55.953 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.952+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 426155769 0 0) 0x7f26d011e3d0 con 0x7f26d011e790
2026-03-10T13:37:55.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.952+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f26ac003620 con 0x7f26d011e790
2026-03-10T13:37:55.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.952+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 71617600 0 0) 0x7f26d011e030 con 0x7f26d0074250
2026-03-10T13:37:55.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.952+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f26d011e3d0 con 0x7f26d0074250
2026-03-10T13:37:55.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.953+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3559667422 0 0) 0x7f26ac003620 con 0x7f26d011e790
2026-03-10T13:37:55.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.953+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f26d011e030 con 0x7f26d011e790
2026-03-10T13:37:55.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.953+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f26c4003190 con 0x7f26d011e790
2026-03-10T13:37:55.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.953+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3421072130 0 0) 0x7f26d011e030 con 0x7f26d011e790
2026-03-10T13:37:55.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.953+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 >> v1:192.168.123.109:6789/0 conn(0x7f26d010fdc0 legacy=0x7f26d011d210 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:37:55.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.953+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 >> v1:192.168.123.105:6789/0 conn(0x7f26d0074250 legacy=0x7f26d01174f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:37:55.954 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.953+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f26d011e5a0 con 0x7f26d011e790
2026-03-10T13:37:55.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.953+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f26c4003be0 con 0x7f26d011e790
2026-03-10T13:37:55.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.953+0000 7f26d7b1a640 1 -- 192.168.123.105:0/2458719331 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f26d01c1a30 con 0x7f26d011e790
2026-03-10T13:37:55.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.953+0000 7f26d7b1a640 1 -- 192.168.123.105:0/2458719331 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f26d01c1fe0 con 0x7f26d011e790
2026-03-10T13:37:55.955 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.954+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f26c4003930 con 0x7f26d011e790
2026-03-10T13:37:55.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.954+0000 7f26d7b1a640 1 -- 192.168.123.105:0/2458719331 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f26d010a680 con 0x7f26d011e790
2026-03-10T13:37:55.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.956+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f26c4003f10 con 0x7f26d011e790
2026-03-10T13:37:55.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.956+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f26c4059cf0 con 0x7f26d011e790
2026-03-10T13:37:55.958 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:55.958+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f26c405e0e0 con 0x7f26d011e790
2026-03-10T13:37:56.106 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.105+0000 7f26d7b1a640 1 -- 192.168.123.105:0/2458719331 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "mgr dump", "format": "json"} v 0) -- 0x7f26d0124080 con 0x7f26d011e790
2026-03-10T13:37:56.107 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.107+0000 7f26beffd640 1 -- 192.168.123.105:0/2458719331 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "mgr dump", "format": "json"}]=0 v15) ==== 74+0+191979 (unknown 170547878 0 894962801) 0x7f26c4061d90 con 0x7f26d011e790
2026-03-10T13:37:56.108 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:37:56.111 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.111+0000 7f26d7b1a640 1 -- 192.168.123.105:0/2458719331 >> v1:192.168.123.105:6800/3845654103 conn(0x7f26ac078170 legacy=0x7f26ac07a630 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:37:56.111 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.111+0000 7f26d7b1a640 1 -- 192.168.123.105:0/2458719331 >> v1:192.168.123.105:6790/0 conn(0x7f26d011e790 legacy=0x7f26d011d920 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:37:56.111 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.111+0000 7f26d7b1a640 1 -- 192.168.123.105:0/2458719331 shutdown_connections
2026-03-10T13:37:56.111 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.111+0000 7f26d7b1a640 1 -- 192.168.123.105:0/2458719331 >> 192.168.123.105:0/2458719331 conn(0x7f26d006d730 msgr2=0x7f26d00717d0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:37:56.112 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.111+0000 7f26d7b1a640 1 -- 192.168.123.105:0/2458719331 shutdown_connections
2026-03-10T13:37:56.112 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.111+0000 7f26d7b1a640 1 -- 192.168.123.105:0/2458719331 wait complete.
2026-03-10T13:37:56.266 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":15,"flags":0,"active_gid":14150,"active_name":"y","active_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6800","nonce":3845654103}]},"active_addr":"192.168.123.105:6800/3845654103","active_change":"2026-03-10T13:35:46.531332+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":14208,"name":"x","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: 
name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format 
HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked 
down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container 
image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = 
Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger 
collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.105:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":2679078972}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":1417784784}]},{"name":"rbd_support","addrvec":[{"type":"v2","a
ddr":"192.168.123.105:0","nonce":610822261}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":354009470}]}]} 2026-03-10T13:37:56.268 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T13:37:56.268 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T13:37:56.268 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd dump --format=json 2026-03-10T13:37:56.458 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[51512]: pgmap v110: 132 pgs: 82 active+clean, 23 creating+peering, 27 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.5 KiB/s wr, 5 op/s 2026-03-10T13:37:56.458 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T13:37:56.458 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[51512]: from='client.24401 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T13:37:56.458 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[51512]: osdmap e57: 8 total, 8 up, 8 in 2026-03-10T13:37:56.458 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[51512]: osdmap e58: 8 total, 8 up, 8 in 2026-03-10T13:37:56.458 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:56.458 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:56.458 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:56.458 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T13:37:56.458 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-10T13:37:56.458 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:56.458 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2458719331' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T13:37:56.458 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[58955]: pgmap v110: 132 pgs: 82 active+clean, 23 creating+peering, 27 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.5 KiB/s wr, 5 op/s 2026-03-10T13:37:56.458 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T13:37:56.458 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[58955]: from='client.24401 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T13:37:56.459 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[58955]: osdmap e57: 8 total, 8 up, 8 in 2026-03-10T13:37:56.459 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[58955]: osdmap e58: 8 total, 8 up, 8 in 2026-03-10T13:37:56.459 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:56.459 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:56.459 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:56.459 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T13:37:56.459 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-10T13:37:56.459 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:56.459 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:56 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2458719331' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T13:37:56.461 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:56.521 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:56 vm09 ceph-mon[53367]: pgmap v110: 132 pgs: 82 active+clean, 23 creating+peering, 27 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.5 KiB/s wr, 5 op/s 2026-03-10T13:37:56.521 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:56 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/480736044' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T13:37:56.521 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:56 vm09 ceph-mon[53367]: from='client.24401 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T13:37:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:56 vm09 ceph-mon[53367]: osdmap e57: 8 total, 8 up, 8 in 2026-03-10T13:37:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:56 vm09 ceph-mon[53367]: osdmap e58: 8 total, 8 up, 8 in 2026-03-10T13:37:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:56 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:56 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:56 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:56 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T13:37:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:56 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-10T13:37:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:56 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:37:56.522 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:56 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2458719331' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T13:37:56.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.665+0000 7f0f80c3b640 1 -- 192.168.123.105:0/1369318921 >> v1:192.168.123.105:6789/0 conn(0x7f0f7c10f0e0 legacy=0x7f0f7c1115a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:56.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.666+0000 7f0f80c3b640 1 -- 192.168.123.105:0/1369318921 shutdown_connections 2026-03-10T13:37:56.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.666+0000 7f0f80c3b640 1 -- 192.168.123.105:0/1369318921 >> 192.168.123.105:0/1369318921 conn(0x7f0f7c0fe3d0 msgr2=0x7f0f7c1007f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:56.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.666+0000 7f0f80c3b640 1 -- 192.168.123.105:0/1369318921 shutdown_connections 2026-03-10T13:37:56.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.666+0000 7f0f80c3b640 1 -- 192.168.123.105:0/1369318921 wait complete. 
2026-03-10T13:37:56.666 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.666+0000 7f0f80c3b640 1 Processor -- start 2026-03-10T13:37:56.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.666+0000 7f0f80c3b640 1 -- start start 2026-03-10T13:37:56.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.667+0000 7f0f80c3b640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f0f7c1abab0 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.667+0000 7f0f80c3b640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f0f7c1accb0 con 0x7f0f7c108690 2026-03-10T13:37:56.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.667+0000 7f0f80c3b640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f0f7c1adeb0 con 0x7f0f7c10b540 2026-03-10T13:37:56.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.667+0000 7f0f7a575640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f0f7c108690 0x7f0f7c10e950 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:41842/0 (socket says 192.168.123.105:41842) 2026-03-10T13:37:56.667 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.667+0000 7f0f7a575640 1 -- 192.168.123.105:0/1313933392 learned_addr learned my addr 192.168.123.105:0/1313933392 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:56.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.667+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4018674182 0 0) 0x7f0f7c1accb0 con 0x7f0f7c108690 2026-03-10T13:37:56.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.667+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0f4c003620 con 0x7f0f7c108690 2026-03-10T13:37:56.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.668+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2297850413 0 0) 0x7f0f7c1abab0 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.668+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0f7c1accb0 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.668+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 993922029 0 0) 0x7f0f7c1adeb0 con 0x7f0f7c10b540 2026-03-10T13:37:56.668 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.668+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0f7c1abab0 con 0x7f0f7c10b540 2026-03-10T13:37:56.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.668+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1459280264 0 0) 0x7f0f7c1accb0 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.668+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 --> 
v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f0f7c1adeb0 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.668+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 859236395 0 0) 0x7f0f4c003620 con 0x7f0f7c108690 2026-03-10T13:37:56.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.669+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f0f7c1accb0 con 0x7f0f7c108690 2026-03-10T13:37:56.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.669+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2165990670 0 0) 0x7f0f7c1abab0 con 0x7f0f7c10b540 2026-03-10T13:37:56.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.669+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f0f4c003620 con 0x7f0f7c10b540 2026-03-10T13:37:56.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.669+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f0f6c002ef0 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.669+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f0f64003100 con 0x7f0f7c108690 2026-03-10T13:37:56.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.669+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f0f70002f80 con 0x7f0f7c10b540 2026-03-10T13:37:56.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.669+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1918603371 0 0) 0x7f0f7c1adeb0 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.669 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.669+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 >> v1:192.168.123.105:6790/0 conn(0x7f0f7c10b540 legacy=0x7f0f7c1a68c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:56.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.669+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 >> v1:192.168.123.109:6789/0 conn(0x7f0f7c108690 legacy=0x7f0f7c10e950 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:56.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.669+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0f7c1af0b0 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.670 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.670+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f0f6c004230 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.674 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.670+0000 7f0f80c3b640 1 -- 192.168.123.105:0/1313933392 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 
0x7f0f7c1ae0e0 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.674 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.670+0000 7f0f80c3b640 1 -- 192.168.123.105:0/1313933392 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f0f7c1ae640 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.674 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.670+0000 7f0f80c3b640 1 -- 192.168.123.105:0/1313933392 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0f48005180 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.674 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.671+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f0f6c005060 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.674 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.673+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f0f6c003de0 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.674 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.674+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f0f6c093d10 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.675 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.674+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f0f6c094150 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.767+0000 7f0f80c3b640 1 -- 192.168.123.105:0/1313933392 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f0f48005470 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.769 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.768+0000 7f0f637fe640 1 -- 192.168.123.105:0/1313933392 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v58) ==== 74+0+20894 (unknown 2626357731 0 1322912952) 0x7f0f6c05cde0 con 0x7f0f7c10f0e0 2026-03-10T13:37:56.770 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:37:56.770 
INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":58,"fsid":"e063dc72-1c85-11f1-a098-09993c5c5b66","created":"2026-03-10T13:35:24.006116+0000","modified":"2026-03-10T13:37:55.583739+0000","last_up_change":"2026-03-10T13:37:43.799387+0000","last_in_change":"2026-03-10T13:37:32.453876+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T13:36:50.580400+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"datapool","create_time":"2026-03-10T13:37:46.774294+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"52","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":52,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":".rgw.root","create_time":"2026-03-10T13:37:47.158899+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"51","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"default.rgw.log","create_time":"2026-03-10T13:37:48.350955+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"53","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.25,"score_stable":2.25,"optimal_score":1,"raw_score_acting":2.25,"raw_score_stable":2.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-10T13:37:50.296362+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"55","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-10T13:37:52.417999+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"57","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"27418eee-abb2-4d75-aadf-ed68d081290c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6801","nonce":3141950523}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6802","nonce":3141950523}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6804","nonce":3141950523}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6803","nonce":3141950523}]},"public_addr":"192.168.123.105:6801/3141950523","cluster_addr":"192.168.123.105:6802/3141950523","heartbeat_back_addr":"192.168.123.105:6804/3141950523","heartbeat_front_addr":"192.168.123.105:6803/3141950523","state":["exists","up"]},{"osd":1,"uuid":"f512d6be-c3f7-4742-a120-ab1907d08ac3","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6805","nonce":1936282018}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6806","nonce":1936282018}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6808","nonce":1936282018}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6807","nonce":1936282018}]},"public_addr":"192.168.123.105:6805/1936282018","cluster_addr":"192.168.123.105:6806/1936282018","heartbeat_back_addr":"192.168.123.105:6808/1936282018","heartbeat_front_addr":"192.168.123.105:6807/1936282018","state":["exists","up"]},{"osd":2,"uuid":"a686b53f-59af-40c9-a5d6-bde07754c934","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6809","nonce":3999426341}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6810","nonce":3999426341}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6812","nonce":3999426341}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6811","nonce":3999426341}]},"public_addr":"192.168.123.105:6809/3999426341","cluster_addr":"192.168.123.105:6810/3999426341","heartbeat_back_addr":"192.168.123.105:6812/3999426341","heartbeat_front_addr":"192.168.123.105:6811/3999426341","state":["exists","up"]},{"osd":3,"uuid":"e9aa7ce5-7d1a-4946-9551-10bfc47bd58b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":24,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6813","nonce":693788844}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6814","nonce":693788844}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6816","nonce":693788844}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6815","nonce":693788844}]},"public_addr":"192.168.123.105:6813/693788844","cluster_addr":"192.168.123.105:6814/693788844","heartbeat_back_addr":"192.168.123.105:6816/693788844","heartbeat_front_addr":"192.168.123.105:6815/693788844","state":["exists","up"]},{"osd":4,"uuid":"72d4c584-8c2a-4a71-a3f3-b3a23f142206","up":1,"in":1,"weight":1,"primary
_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6800","nonce":3898346219}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6801","nonce":3898346219}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6803","nonce":3898346219}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6802","nonce":3898346219}]},"public_addr":"192.168.123.109:6800/3898346219","cluster_addr":"192.168.123.109:6801/3898346219","heartbeat_back_addr":"192.168.123.109:6803/3898346219","heartbeat_front_addr":"192.168.123.109:6802/3898346219","state":["exists","up"]},{"osd":5,"uuid":"dba319a5-a2e5-417f-b334-ac4bdbd6a2aa","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":35,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6804","nonce":452558008}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6805","nonce":452558008}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6807","nonce":452558008}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6806","nonce":452558008}]},"public_addr":"192.168.123.109:6804/452558008","cluster_addr":"192.168.123.109:6805/452558008","heartbeat_back_addr":"192.168.123.109:6807/452558008","heartbeat_front_addr":"192.168.123.109:6806/452558008","state":["exists","up"]},{"osd":6,"uuid":"afe42148-806c-4ff6-9729-634661c10d48","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":41,"up_thru":53,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6808","nonce":354656606}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6809","nonce":354656606}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6811","nonce":354656606}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6810","nonce":354656606}]},"public_addr":"192.168.123.109:6808/354656606","cluster_addr":"192.168.123.109:6809/354656606","heartbeat_back_addr":"192.168.123.109:6811/354656606","heartbeat_front_addr":"192.168.123.109:6810/354656606","state":["exists","up"]},{"osd":7,"uuid":"902bce05-1aee-4630-a57d-74b141285652","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":46,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6812","nonce":3977889858}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6813","nonce":3977889858}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6815","nonce":3977889858}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6814","nonce":3977889858}]},"public_addr":"192.168.123.109:6812/3977889858","cluster_addr":"192.168.123.109:6813/3977889858","heartbeat_back_addr":"192.168.123.109:6815/3977889858","heartbeat_front_addr":"192.168.123.109:6814/3977889858","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:36:24.966724+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:36:36.004110+0000","dead_epoch":0},{"osd
":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:36:46.557144+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:36:58.013466+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:37:08.845599+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:37:19.063608+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:37:30.636274+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:37:42.220401+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.105:0/3712072921":"2026-03-11T13:35:46.531042+0000","192.168.123.105:6800/3334108074":"2026-03-11T13:35:46.531042+0000","192.168.123.105:0/2337127528":"2026-03-11T13:35:46.531042+0000","192.168.123.105:0/3792932241":"2026-03-11T13:35:35.615869+0000","192.168.123.105:0/1473752177":"2026-03-11T13:35:46.531042+0000","192.168.123.105:0/3043025705":"2026-03-11T13:35:35.615869+0000","192.168.123.105:0/1619687969":"2026-03-11T13:35:35.615869+0000","192.168.123.105:6800/1920070151":"2026-03-11T13:35:35.615869+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T13:37:56.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.771+0000 7f0f80c3b640 1 -- 192.168.123.105:0/1313933392 >> v1:192.168.123.105:6800/3845654103 conn(0x7f0f4c0812c0 legacy=0x7f0f4c083780 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:56.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.772+0000 7f0f80c3b640 1 -- 192.168.123.105:0/1313933392 >> v1:192.168.123.105:6789/0 conn(0x7f0f7c10f0e0 legacy=0x7f0f7c1aa180 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:56.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.772+0000 7f0f80c3b640 1 -- 192.168.123.105:0/1313933392 shutdown_connections 2026-03-10T13:37:56.772 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.772+0000 7f0f80c3b640 1 -- 192.168.123.105:0/1313933392 >> 192.168.123.105:0/1313933392 conn(0x7f0f7c0fe3d0 msgr2=0x7f0f7c112520 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:56.773 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.772+0000 7f0f80c3b640 1 -- 192.168.123.105:0/1313933392 shutdown_connections 2026-03-10T13:37:56.773 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:56.772+0000 7f0f80c3b640 1 -- 192.168.123.105:0/1313933392 wait complete. 
2026-03-10T13:37:56.922 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-10T13:37:56.923 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd dump --format=json 2026-03-10T13:37:56.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:56 vm09 systemd[1]: Starting Ceph iscsi.iscsi.a for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T13:37:57.124 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:57.236 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:56 vm09 podman[78154]: 2026-03-10 13:37:56.981469281 +0000 UTC m=+0.070326538 container create f92110db3ce56983e268366da8249f5883e9c249c1139b949014ae223ffd1f43 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) 2026-03-10T13:37:57.236 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 podman[78154]: 2026-03-10 13:37:57.012903571 +0000 UTC m=+0.101760828 container init f92110db3ce56983e268366da8249f5883e9c249c1139b949014ae223ffd1f43 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T13:37:57.236 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 podman[78154]: 2026-03-10 13:37:57.018442087 +0000 UTC m=+0.107299333 container start f92110db3ce56983e268366da8249f5883e9c249c1139b949014ae223ffd1f43 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS 
Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2) 2026-03-10T13:37:57.236 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 bash[78154]: f92110db3ce56983e268366da8249f5883e9c249c1139b949014ae223ffd1f43 2026-03-10T13:37:57.236 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 podman[78154]: 2026-03-10 13:37:56.922054926 +0000 UTC m=+0.010912194 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T13:37:57.236 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 systemd[1]: Started Ceph iscsi.iscsi.a for e063dc72-1c85-11f1-a098-09993c5c5b66. 2026-03-10T13:37:57.255 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.254+0000 7f637ac20640 1 -- 192.168.123.105:0/3703273418 >> v1:192.168.123.109:6789/0 conn(0x7f636c0074f0 legacy=0x7f636c009970 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:57.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.254+0000 7f637ac20640 1 -- 192.168.123.105:0/3703273418 shutdown_connections 2026-03-10T13:37:57.256 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.254+0000 7f637ac20640 1 -- 192.168.123.105:0/3703273418 >> 192.168.123.105:0/3703273418 conn(0x7f636c01a440 msgr2=0x7f636c01a850 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:57.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.255+0000 7f637ac20640 1 -- 192.168.123.105:0/3703273418 shutdown_connections 2026-03-10T13:37:57.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.255+0000 7f637ac20640 1 -- 192.168.123.105:0/3703273418 wait complete. 
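The journalctl lines above show cephadm materializing the iscsi.iscsi.a daemon from this job's roles: systemd starts a per-daemon unit, which launches a podman container named after the cluster fsid and the daemon id. A small sketch of that naming scheme; the container name matches the podman "container create" line above, while the systemd unit template is an assumption from general cephadm convention, not taken from this log:

    FSID = "e063dc72-1c85-11f1-a098-09993c5c5b66"

    def container_name(daemon: str) -> str:
        # ceph-<fsid>-<daemon id with dots replaced by dashes>, as seen in
        # name=ceph-e063dc72-...-iscsi-iscsi-a in the podman lines above
        return "ceph-{}-{}".format(FSID, daemon.replace(".", "-"))

    def unit_name(daemon: str) -> str:
        # assumed cephadm convention: one templated systemd unit per daemon
        return "ceph-{}@{}.service".format(FSID, daemon)

    print(container_name("iscsi.iscsi.a"))
    print(unit_name("iscsi.iscsi.a"))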
2026-03-10T13:37:57.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.256+0000 7f637ac20640 1 Processor -- start 2026-03-10T13:37:57.257 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.256+0000 7f637ac20640 1 -- start start 2026-03-10T13:37:57.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.256+0000 7f637ac20640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f636c0a5820 con 0x7f636c0aa780 2026-03-10T13:37:57.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.256+0000 7f637ac20640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f636c166920 con 0x7f636c00b180 2026-03-10T13:37:57.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.256+0000 7f637ac20640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f636c167b00 con 0x7f636c0d1d90 2026-03-10T13:37:57.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.256+0000 7f637941d640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f636c0aa780 0x7f636c015750 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:60062/0 (socket says 192.168.123.105:60062) 2026-03-10T13:37:57.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.256+0000 7f637941d640 1 -- 192.168.123.105:0/1203472867 learned_addr learned my addr 192.168.123.105:0/1203472867 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:57.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.258+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3620405866 0 0) 0x7f636c0a5820 con 0x7f636c0aa780 2026-03-10T13:37:57.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.258+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6360003620 con 0x7f636c0aa780 2026-03-10T13:37:57.258 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.258+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2806041266 0 0) 0x7f636c167b00 con 0x7f636c0d1d90 2026-03-10T13:37:57.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.258+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f636c0a5820 con 0x7f636c0d1d90 2026-03-10T13:37:57.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.259+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3484014903 0 0) 0x7f636c166920 con 0x7f636c00b180 2026-03-10T13:37:57.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.259+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f636c167b00 con 0x7f636c00b180 2026-03-10T13:37:57.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.259+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3423485315 0 0) 0x7f6360003620 con 0x7f636c0aa780 2026-03-10T13:37:57.259 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.259+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 --> 
v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f636c166920 con 0x7f636c0aa780 2026-03-10T13:37:57.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.259+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3798294236 0 0) 0x7f636c0a5820 con 0x7f636c0d1d90 2026-03-10T13:37:57.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.260+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6360003620 con 0x7f636c0d1d90 2026-03-10T13:37:57.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.260+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f6374050f30 con 0x7f636c0aa780 2026-03-10T13:37:57.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.260+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f6364002fb0 con 0x7f636c0d1d90 2026-03-10T13:37:57.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.260+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2107283622 0 0) 0x7f636c167b00 con 0x7f636c00b180 2026-03-10T13:37:57.260 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.260+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f636c0a5820 con 0x7f636c00b180 2026-03-10T13:37:57.261 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.260+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1631062000 0 0) 0x7f636c166920 con 0x7f636c0aa780 2026-03-10T13:37:57.261 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.260+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 >> v1:192.168.123.105:6790/0 conn(0x7f636c0d1d90 legacy=0x7f636c015e60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:57.261 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.261+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 >> v1:192.168.123.109:6789/0 conn(0x7f636c00b180 legacy=0x7f636c0a4f60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:57.261 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.261+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f636c168ce0 con 0x7f636c0aa780 2026-03-10T13:37:57.262 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.261+0000 7f637ac20640 1 -- 192.168.123.105:0/1203472867 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f636c166b50 con 0x7f636c0aa780 2026-03-10T13:37:57.262 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.261+0000 7f637ac20640 1 -- 192.168.123.105:0/1203472867 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f636c1670e0 con 0x7f636c0aa780 2026-03-10T13:37:57.263 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.263+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f6374063510 con 0x7f636c0aa780 
2026-03-10T13:37:57.263 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.263+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f637406a810 con 0x7f636c0aa780 2026-03-10T13:37:57.263 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.263+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 15) ==== 100000+0+0 (unknown 342065640 0 0) 0x7f6374083060 con 0x7f636c0aa780 2026-03-10T13:37:57.264 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.264+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f63740fa5f0 con 0x7f636c0aa780 2026-03-10T13:37:57.265 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.265+0000 7f637ac20640 1 -- 192.168.123.105:0/1203472867 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6344005180 con 0x7f636c0aa780 2026-03-10T13:37:57.271 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.268+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f63740c3740 con 0x7f636c0aa780 2026-03-10T13:37:57.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.368+0000 7f637ac20640 1 -- 192.168.123.105:0/1203472867 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f6344005470 con 0x7f636c0aa780 2026-03-10T13:37:57.370 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.369+0000 7f636affd640 1 -- 192.168.123.105:0/1203472867 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v58) ==== 74+0+20894 (unknown 2626357731 0 1322912952) 0x7f63740c73f0 con 0x7f636c0aa780 2026-03-10T13:37:57.370 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:37:57.370 
INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":58,"fsid":"e063dc72-1c85-11f1-a098-09993c5c5b66","created":"2026-03-10T13:35:24.006116+0000","modified":"2026-03-10T13:37:55.583739+0000","last_up_change":"2026-03-10T13:37:43.799387+0000","last_in_change":"2026-03-10T13:37:32.453876+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T13:36:50.580400+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"datapool","create_time":"2026-03-10T13:37:46.774294+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"52","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":52,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":".rgw.root","create_time":"2026-03-10T13:37:47.158899+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"51","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"default.rgw.log","create_time":"2026-03-10T13:37:48.350955+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"53","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.25,"score_stable":2.25,"optimal_score":1,"raw_score_acting":2.25,"raw_score_stable":2.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-10T13:37:50.296362+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"55","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-10T13:37:52.417999+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"57","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"27418eee-abb2-4d75-aadf-ed68d081290c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6801","nonce":3141950523}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6802","nonce":3141950523}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6804","nonce":3141950523}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6803","nonce":3141950523}]},"public_addr":"192.168.123.105:6801/3141950523","cluster_addr":"192.168.123.105:6802/3141950523","heartbeat_back_addr":"192.168.123.105:6804/3141950523","heartbeat_front_addr":"192.168.123.105:6803/3141950523","state":["exists","up"]},{"osd":1,"uuid":"f512d6be-c3f7-4742-a120-ab1907d08ac3","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6805","nonce":1936282018}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6806","nonce":1936282018}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6808","nonce":1936282018}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6807","nonce":1936282018}]},"public_addr":"192.168.123.105:6805/1936282018","cluster_addr":"192.168.123.105:6806/1936282018","heartbeat_back_addr":"192.168.123.105:6808/1936282018","heartbeat_front_addr":"192.168.123.105:6807/1936282018","state":["exists","up"]},{"osd":2,"uuid":"a686b53f-59af-40c9-a5d6-bde07754c934","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6809","nonce":3999426341}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6810","nonce":3999426341}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6812","nonce":3999426341}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6811","nonce":3999426341}]},"public_addr":"192.168.123.105:6809/3999426341","cluster_addr":"192.168.123.105:6810/3999426341","heartbeat_back_addr":"192.168.123.105:6812/3999426341","heartbeat_front_addr":"192.168.123.105:6811/3999426341","state":["exists","up"]},{"osd":3,"uuid":"e9aa7ce5-7d1a-4946-9551-10bfc47bd58b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":24,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6813","nonce":693788844}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6814","nonce":693788844}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6816","nonce":693788844}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6815","nonce":693788844}]},"public_addr":"192.168.123.105:6813/693788844","cluster_addr":"192.168.123.105:6814/693788844","heartbeat_back_addr":"192.168.123.105:6816/693788844","heartbeat_front_addr":"192.168.123.105:6815/693788844","state":["exists","up"]},{"osd":4,"uuid":"72d4c584-8c2a-4a71-a3f3-b3a23f142206","up":1,"in":1,"weight":1,"primary
_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6800","nonce":3898346219}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6801","nonce":3898346219}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6803","nonce":3898346219}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6802","nonce":3898346219}]},"public_addr":"192.168.123.109:6800/3898346219","cluster_addr":"192.168.123.109:6801/3898346219","heartbeat_back_addr":"192.168.123.109:6803/3898346219","heartbeat_front_addr":"192.168.123.109:6802/3898346219","state":["exists","up"]},{"osd":5,"uuid":"dba319a5-a2e5-417f-b334-ac4bdbd6a2aa","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":35,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6804","nonce":452558008}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6805","nonce":452558008}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6807","nonce":452558008}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6806","nonce":452558008}]},"public_addr":"192.168.123.109:6804/452558008","cluster_addr":"192.168.123.109:6805/452558008","heartbeat_back_addr":"192.168.123.109:6807/452558008","heartbeat_front_addr":"192.168.123.109:6806/452558008","state":["exists","up"]},{"osd":6,"uuid":"afe42148-806c-4ff6-9729-634661c10d48","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":41,"up_thru":53,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6808","nonce":354656606}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6809","nonce":354656606}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6811","nonce":354656606}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6810","nonce":354656606}]},"public_addr":"192.168.123.109:6808/354656606","cluster_addr":"192.168.123.109:6809/354656606","heartbeat_back_addr":"192.168.123.109:6811/354656606","heartbeat_front_addr":"192.168.123.109:6810/354656606","state":["exists","up"]},{"osd":7,"uuid":"902bce05-1aee-4630-a57d-74b141285652","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":46,"up_thru":55,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6812","nonce":3977889858}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6813","nonce":3977889858}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6815","nonce":3977889858}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.109:6814","nonce":3977889858}]},"public_addr":"192.168.123.109:6812/3977889858","cluster_addr":"192.168.123.109:6813/3977889858","heartbeat_back_addr":"192.168.123.109:6815/3977889858","heartbeat_front_addr":"192.168.123.109:6814/3977889858","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:36:24.966724+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:36:36.004110+0000","dead_epoch":0},{"osd
":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:36:46.557144+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:36:58.013466+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:37:08.845599+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:37:19.063608+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:37:30.636274+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:37:42.220401+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.105:0/3712072921":"2026-03-11T13:35:46.531042+0000","192.168.123.105:6800/3334108074":"2026-03-11T13:35:46.531042+0000","192.168.123.105:0/2337127528":"2026-03-11T13:35:46.531042+0000","192.168.123.105:0/3792932241":"2026-03-11T13:35:35.615869+0000","192.168.123.105:0/1473752177":"2026-03-11T13:35:46.531042+0000","192.168.123.105:0/3043025705":"2026-03-11T13:35:35.615869+0000","192.168.123.105:0/1619687969":"2026-03-11T13:35:35.615869+0000","192.168.123.105:6800/1920070151":"2026-03-11T13:35:35.615869+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T13:37:57.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.371+0000 7f637ac20640 1 -- 192.168.123.105:0/1203472867 >> v1:192.168.123.105:6800/3845654103 conn(0x7f63600787c0 legacy=0x7f636007ac60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:57.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.371+0000 7f637ac20640 1 -- 192.168.123.105:0/1203472867 >> v1:192.168.123.105:6789/0 conn(0x7f636c0aa780 legacy=0x7f636c015750 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:57.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.372+0000 7f637ac20640 1 -- 192.168.123.105:0/1203472867 shutdown_connections 2026-03-10T13:37:57.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.372+0000 7f637ac20640 1 -- 192.168.123.105:0/1203472867 >> 192.168.123.105:0/1203472867 conn(0x7f636c01a440 msgr2=0x7f636c00b950 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:57.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.372+0000 7f637ac20640 1 -- 192.168.123.105:0/1203472867 shutdown_connections 2026-03-10T13:37:57.372 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:57.372+0000 7f637ac20640 1 -- 192.168.123.105:0/1203472867 wait complete. 
2026-03-10T13:37:57.529 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph tell osd.0 flush_pg_stats 2026-03-10T13:37:57.529 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph tell osd.1 flush_pg_stats 2026-03-10T13:37:57.529 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph tell osd.2 flush_pg_stats 2026-03-10T13:37:57.529 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph tell osd.3 flush_pg_stats 2026-03-10T13:37:57.529 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph tell osd.4 flush_pg_stats 2026-03-10T13:37:57.529 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph tell osd.5 flush_pg_stats 2026-03-10T13:37:57.530 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph tell osd.6 flush_pg_stats 2026-03-10T13:37:57.530 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph tell osd.7 flush_pg_stats 2026-03-10T13:37:57.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:57 vm09 ceph-mon[53367]: Deploying daemon iscsi.iscsi.a on vm09 2026-03-10T13:37:57.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:57 vm09 ceph-mon[53367]: pgmap v113: 132 pgs: 109 active+clean, 16 creating+peering, 7 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T13:37:57.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:57 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1313933392' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:37:57.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:57 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:57.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:57 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:57.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:57 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:57.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:57 vm09 ceph-mon[53367]: Checking pool "datapool" exists for service iscsi.datapool 2026-03-10T13:37:57.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:57 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:57.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:57 vm09 ceph-mon[53367]: Deploying daemon prometheus.a on vm09 2026-03-10T13:37:57.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:57 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1203472867' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:37:57.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:57 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.109:0/3725841080' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T13:37:57.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug Started the configuration object watcher 2026-03-10T13:37:57.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug Checking for config object changes every 1s 2026-03-10T13:37:57.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug Processing osd blocklist entries for this node 2026-03-10T13:37:57.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug Reading the configuration object to update local LIO configuration 2026-03-10T13:37:57.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug Configuration does not have an entry for this host(vm09.local) - nothing to define to LIO 2026-03-10T13:37:57.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: * Serving Flask app 'rbd-target-api' (lazy loading) 2026-03-10T13:37:57.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: * Environment: production 2026-03-10T13:37:57.675 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-10T13:37:57.675 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: Use a production WSGI server instead. 
2026-03-10T13:37:57.675 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: * Debug mode: off 2026-03-10T13:37:57.675 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug * Running on all addresses. 2026-03-10T13:37:57.675 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-10T13:37:57.675 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: * Running on all addresses. 2026-03-10T13:37:57.675 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-10T13:37:57.675 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-10T13:37:57.675 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:37:57 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-10T13:37:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[51512]: Deploying daemon iscsi.iscsi.a on vm09 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[51512]: pgmap v113: 132 pgs: 109 active+clean, 16 creating+peering, 7 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1313933392' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[51512]: Checking pool "datapool" exists for service iscsi.datapool 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[51512]: Deploying daemon prometheus.a on vm09 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1203472867' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.109:0/3725841080' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[58955]: Deploying daemon iscsi.iscsi.a on vm09 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[58955]: pgmap v113: 132 pgs: 109 active+clean, 16 creating+peering, 7 unknown; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1313933392' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[58955]: Checking pool "datapool" exists for service iscsi.datapool 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[58955]: Deploying daemon prometheus.a on vm09 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1203472867' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:37:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:57 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.109:0/3725841080' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T13:37:58.219 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:58.226 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:58.228 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:58.266 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:58.268 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:58.271 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:58.350 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:58.416 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:37:58.801 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.800+0000 7f87e722d640 1 -- 192.168.123.105:0/2204852679 >> v1:192.168.123.105:6790/0 conn(0x7f87e0074230 legacy=0x7f87e0074610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.802 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.801+0000 7f9b34cb0640 1 -- 192.168.123.105:0/4224940514 >> v1:192.168.123.105:6789/0 conn(0x7f9b30074230 legacy=0x7f9b30074610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.802 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.801+0000 7f9b34cb0640 1 -- 192.168.123.105:0/4224940514 shutdown_connections 2026-03-10T13:37:58.802 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.801+0000 7f9b34cb0640 1 -- 192.168.123.105:0/4224940514 >> 192.168.123.105:0/4224940514 conn(0x7f9b3006e900 msgr2=0x7f9b3006ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:58.802 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.801+0000 7f9b34cb0640 1 -- 192.168.123.105:0/4224940514 shutdown_connections 2026-03-10T13:37:58.802 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.801+0000 7f9b34cb0640 1 -- 192.168.123.105:0/4224940514 wait complete. 
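
[annotation] The eight `cephadm shell ... ceph tell osd.N flush_pg_stats` commands above make each OSD push its PG stats before the harness samples cluster state; the repeated "Inferring config" lines are cephadm locating the mon config inside each shell container. A minimal sketch of issuing that per-OSD loop, assuming Python 3; IMAGE and FSID are placeholders for the values visible in the log lines:

    #!/usr/bin/env python3
    # Sketch of the per-OSD flush issued above: run
    # `ceph tell osd.N flush_pg_stats` for each OSD via `cephadm shell`.
    import subprocess

    IMAGE = "quay.ceph.io/ceph-ci/ceph:<sha1>"   # placeholder
    FSID = "<cluster-fsid>"                      # placeholder

    for osd_id in range(8):  # osd.0 .. osd.7, as in the log
        subprocess.run(
            ["sudo", "cephadm", "--image", IMAGE, "shell", "--fsid", FSID,
             "--", "ceph", "tell", f"osd.{osd_id}", "flush_pg_stats"],
            check=True,
        )
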
2026-03-10T13:37:58.802 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.802+0000 7f9b34cb0640 1 Processor -- start 2026-03-10T13:37:58.803 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.802+0000 7f9b34cb0640 1 -- start start 2026-03-10T13:37:58.803 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.802+0000 7f9b34cb0640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9b301b8610 con 0x7f9b301b4b00 2026-03-10T13:37:58.803 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.802+0000 7f9b34cb0640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9b301b97f0 con 0x7f9b3011a770 2026-03-10T13:37:58.803 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.802+0000 7f9b34cb0640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f9b301ba9d0 con 0x7f9b3011e280 2026-03-10T13:37:58.804 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.804+0000 7f87e722d640 1 -- 192.168.123.105:0/2204852679 shutdown_connections 2026-03-10T13:37:58.804 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.804+0000 7f87e722d640 1 -- 192.168.123.105:0/2204852679 >> 192.168.123.105:0/2204852679 conn(0x7f87e006e900 msgr2=0x7f87e006ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:58.804 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.804+0000 7f87e722d640 1 -- 192.168.123.105:0/2204852679 shutdown_connections 2026-03-10T13:37:58.805 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.804+0000 7f87e722d640 1 -- 192.168.123.105:0/2204852679 wait complete. 2026-03-10T13:37:58.805 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.804+0000 7f87e722d640 1 Processor -- start 2026-03-10T13:37:58.805 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.804+0000 7f87e722d640 1 -- start start 2026-03-10T13:37:58.805 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.805+0000 7f87e722d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f87e010e500 con 0x7f87e0074230 2026-03-10T13:37:58.805 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.805+0000 7f87e722d640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f87e010e6d0 con 0x7f87e011e280 2026-03-10T13:37:58.805 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.805+0000 7f87e722d640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f87e01c37f0 con 0x7f87e011a770 2026-03-10T13:37:58.805 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.805+0000 7f87e57a3640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f87e011e280 0x7f87e0119d70 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:41904/0 (socket says 192.168.123.105:41904) 2026-03-10T13:37:58.805 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.805+0000 7f87e57a3640 1 -- 192.168.123.105:0/3525225858 learned_addr learned my addr 192.168.123.105:0/3525225858 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:58.806 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.806+0000 7f9b2dd74640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f9b3011e280 0x7f9b301b33e0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:58772/0 (socket says 192.168.123.105:58772) 2026-03-10T13:37:58.806 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.806+0000 7f9b2dd74640 1 -- 192.168.123.105:0/956979605 learned_addr learned my addr 192.168.123.105:0/956979605 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:58.806 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.806+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2703677010 0 0) 0x7f87e010e6d0 con 0x7f87e011e280 2026-03-10T13:37:58.807 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.806+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2301625310 0 0) 0x7f9b301b8610 con 0x7f9b301b4b00 2026-03-10T13:37:58.807 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.807+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f87b0003620 con 0x7f87e011e280 2026-03-10T13:37:58.808 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.808+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3567900669 0 0) 0x7f87e010e500 con 0x7f87e0074230 2026-03-10T13:37:58.808 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.808+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f87e010e6d0 con 0x7f87e0074230 2026-03-10T13:37:58.808 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.808+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9b08003620 con 0x7f9b301b4b00 2026-03-10T13:37:58.808 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.808+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2466642243 0 0) 0x7f87e01c37f0 con 0x7f87e011a770 2026-03-10T13:37:58.809 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.808+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f87e010e500 con 0x7f87e011a770 2026-03-10T13:37:58.809 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.808+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 98965372 0 0) 0x7f9b301b97f0 con 0x7f9b3011a770 2026-03-10T13:37:58.809 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.809+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9b301b8610 con 0x7f9b3011a770 2026-03-10T13:37:58.809 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.809+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 7622380 0 0) 0x7f9b301ba9d0 con 0x7f9b3011e280 2026-03-10T13:37:58.809 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.809+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2170259168 0 0) 0x7f87b0003620 con 0x7f87e011e280 2026-03-10T13:37:58.809 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.809+0000 
7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f87e01c37f0 con 0x7f87e011e280 2026-03-10T13:37:58.809 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.809+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9b301b97f0 con 0x7f9b3011e280 2026-03-10T13:37:58.809 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.809+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1467082736 0 0) 0x7f87e010e6d0 con 0x7f87e0074230 2026-03-10T13:37:58.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.809+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f87b0003620 con 0x7f87e0074230 2026-03-10T13:37:58.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.809+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2529215778 0 0) 0x7f9b08003620 con 0x7f9b301b4b00 2026-03-10T13:37:58.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.809+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f9b301ba9d0 con 0x7f9b301b4b00 2026-03-10T13:37:58.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.810+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3952239467 0 0) 0x7f87e010e500 con 0x7f87e011a770 2026-03-10T13:37:58.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.810+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 218780545 0 0) 0x7f9b301b8610 con 0x7f9b3011a770 2026-03-10T13:37:58.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.810+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f9b08003620 con 0x7f9b3011a770 2026-03-10T13:37:58.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.810+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f87e010e6d0 con 0x7f87e011a770 2026-03-10T13:37:58.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.810+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f87d8003450 con 0x7f87e011e280 2026-03-10T13:37:58.811 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.810+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1647238159 0 0) 0x7f9b301b97f0 con 0x7f9b3011e280 2026-03-10T13:37:58.811 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.810+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f9b301b8610 con 0x7f9b3011e280 2026-03-10T13:37:58.811 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.810+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 
523666120 0 0) 0x7f9b280032b0 con 0x7f9b301b4b00 2026-03-10T13:37:58.811 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.810+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f9b20003110 con 0x7f9b3011a770 2026-03-10T13:37:58.811 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.810+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f9b18003120 con 0x7f9b3011e280 2026-03-10T13:37:58.811 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.810+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2874577781 0 0) 0x7f9b301ba9d0 con 0x7f9b301b4b00 2026-03-10T13:37:58.811 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.810+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 >> v1:192.168.123.105:6790/0 conn(0x7f9b3011e280 legacy=0x7f9b301b33e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.811 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.811+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f87d0002c40 con 0x7f87e0074230 2026-03-10T13:37:58.811 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.811+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f87c80030c0 con 0x7f87e011a770 2026-03-10T13:37:58.812 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.812+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1175277887 0 0) 0x7f87e01c37f0 con 0x7f87e011e280 2026-03-10T13:37:58.812 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.812+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 >> v1:192.168.123.109:6789/0 conn(0x7f9b3011a770 legacy=0x7f9b3010dd20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.812 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.812+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 >> v1:192.168.123.105:6790/0 conn(0x7f87e011a770 legacy=0x7f87e010bc40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.812 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.812+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9b301bbbb0 con 0x7f9b301b4b00 2026-03-10T13:37:58.813 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.812+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 >> v1:192.168.123.105:6789/0 conn(0x7f87e0074230 legacy=0x7f87e010b530 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.815 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.813+0000 7f9b34cb0640 1 -- 192.168.123.105:0/956979605 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f9b301bac00 con 0x7f9b301b4b00 2026-03-10T13:37:58.815 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.813+0000 7f9b34cb0640 1 -- 192.168.123.105:0/956979605 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f9b301bb1e0 con 0x7f9b301b4b00 2026-03-10T13:37:58.815 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.814+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f9b28003d10 con 0x7f9b301b4b00 2026-03-10T13:37:58.815 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.814+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f9b28004b90 con 0x7f9b301b4b00 2026-03-10T13:37:58.815 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.815+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7f9b28004e10 con 0x7f9b301b4b00 2026-03-10T13:37:58.816 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.816+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f87e01c49d0 con 0x7f87e011e280 2026-03-10T13:37:58.817 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.816+0000 7f87e722d640 1 -- 192.168.123.105:0/3525225858 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f87e01c39c0 con 0x7f87e011e280 2026-03-10T13:37:58.819 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.816+0000 7f87e722d640 1 -- 192.168.123.105:0/3525225858 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f87e01c3f50 con 0x7f87e011e280 2026-03-10T13:37:58.819 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.818+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f87d8003ec0 con 0x7f87e011e280 2026-03-10T13:37:58.819 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.818+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f87d8005f40 con 0x7f87e011e280 2026-03-10T13:37:58.820 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.819+0000 7f87e722d640 1 -- 192.168.123.105:0/3525225858 --> v1:192.168.123.109:6789/0 -- mon_get_version(what=osdmap handle=1) -- 0x7f87a8000f80 con 0x7f87e011e280 2026-03-10T13:37:58.821 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.820+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7f87d80061a0 con 0x7f87e011e280 2026-03-10T13:37:58.823 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.823+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f87d8095f50 con 0x7f87e011e280 2026-03-10T13:37:58.824 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.823+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f9b28094e90 con 0x7f9b301b4b00 2026-03-10T13:37:58.825 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.824+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 --> v1:192.168.123.105:6813/693788844 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f87b0086cd0 con 0x7f87b0083000 2026-03-10T13:37:58.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.825+0000 7f87d5ffb640 1 -- 
192.168.123.105:0/3525225858 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_get_version_reply(handle=1 version=58) ==== 24+0+0 (unknown 2701689922 0 0) 0x7f87d8096350 con 0x7f87e011e280 2026-03-10T13:37:58.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.825+0000 7f9b1d7fa640 1 -- 192.168.123.105:0/956979605 --> v1:192.168.123.105:6805/1936282018 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f9af40053a0 con 0x7f9af4001630 2026-03-10T13:37:58.826 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.826+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== osd.3 v1:192.168.123.105:6813/693788844 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7f87b0086cd0 con 0x7f87b0083000 2026-03-10T13:37:58.830 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.830+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== osd.1 v1:192.168.123.105:6805/1936282018 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7f9af40053a0 con 0x7f9af4001630 2026-03-10T13:37:58.858 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.857+0000 7f9b1d7fa640 1 -- 192.168.123.105:0/956979605 --> v1:192.168.123.105:6805/1936282018 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f9af4007130 con 0x7f9af4001630 2026-03-10T13:37:58.859 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.858+0000 7f9b1f7fe640 1 -- 192.168.123.105:0/956979605 <== osd.1 v1:192.168.123.105:6805/1936282018 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (unknown 0 0 3669848312) 0x7f9af4007130 con 0x7f9af4001630 2026-03-10T13:37:58.860 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.859+0000 7fe9dc2d1640 1 -- 192.168.123.105:0/2383916245 >> v1:192.168.123.105:6790/0 conn(0x7fe9d4074230 legacy=0x7fe9d4074610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.860 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.860+0000 7fb893ff0640 1 -- 192.168.123.105:0/2404018534 >> v1:192.168.123.105:6789/0 conn(0x7fb88c11a770 legacy=0x7fb88c11cb60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.860 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.860+0000 7f9b34cb0640 1 -- 192.168.123.105:0/956979605 >> v1:192.168.123.105:6805/1936282018 conn(0x7f9af4001630 legacy=0x7f9af4003af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.861 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.860+0000 7fe9dc2d1640 1 -- 192.168.123.105:0/2383916245 shutdown_connections 2026-03-10T13:37:58.861 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.860+0000 7f9b34cb0640 1 -- 192.168.123.105:0/956979605 >> v1:192.168.123.105:6800/3845654103 conn(0x7f9b08078a40 legacy=0x7f9b0807af00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.861 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.860+0000 7f9b34cb0640 1 -- 192.168.123.105:0/956979605 >> v1:192.168.123.105:6789/0 conn(0x7f9b301b4b00 legacy=0x7f9b301b6ef0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.861 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.860+0000 7fe9dc2d1640 1 -- 192.168.123.105:0/2383916245 >> 192.168.123.105:0/2383916245 conn(0x7fe9d406e900 msgr2=0x7fe9d406ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:58.861 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.861+0000 7fe9dc2d1640 1 -- 192.168.123.105:0/2383916245 shutdown_connections 
2026-03-10T13:37:58.861 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.861+0000 7fb893ff0640 1 -- 192.168.123.105:0/2404018534 shutdown_connections 2026-03-10T13:37:58.861 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.861+0000 7fb893ff0640 1 -- 192.168.123.105:0/2404018534 >> 192.168.123.105:0/2404018534 conn(0x7fb88c06e900 msgr2=0x7fb88c06ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:58.862 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.861+0000 7f9b2ed76640 1 -- 192.168.123.105:0/956979605 reap_dead start 2026-03-10T13:37:58.862 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.861+0000 7fb893ff0640 1 -- 192.168.123.105:0/2404018534 shutdown_connections 2026-03-10T13:37:58.862 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.862+0000 7f9b34cb0640 1 -- 192.168.123.105:0/956979605 shutdown_connections 2026-03-10T13:37:58.862 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.862+0000 7f9b34cb0640 1 -- 192.168.123.105:0/956979605 >> 192.168.123.105:0/956979605 conn(0x7f9b3006e900 msgr2=0x7f9b3010f9a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:58.862 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.862+0000 7f9b34cb0640 1 -- 192.168.123.105:0/956979605 shutdown_connections 2026-03-10T13:37:58.862 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.862+0000 7f9b34cb0640 1 -- 192.168.123.105:0/956979605 wait complete. 2026-03-10T13:37:58.863 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.862+0000 7fd1d54c8640 1 -- 192.168.123.105:0/447014465 >> v1:192.168.123.105:6789/0 conn(0x7fd1d011a770 legacy=0x7fd1d011cb60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.863 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.861+0000 7fe9dc2d1640 1 -- 192.168.123.105:0/2383916245 wait complete. 2026-03-10T13:37:58.863 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.863+0000 7fe9dc2d1640 1 Processor -- start 2026-03-10T13:37:58.864 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.863+0000 7fe9dc2d1640 1 -- start start 2026-03-10T13:37:58.864 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.862+0000 7fb893ff0640 1 -- 192.168.123.105:0/2404018534 wait complete. 
2026-03-10T13:37:58.864 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.863+0000 7fb893ff0640 1 Processor -- start 2026-03-10T13:37:58.864 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.864+0000 7fb893ff0640 1 -- start start 2026-03-10T13:37:58.864 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.863+0000 7fe9dc2d1640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe9d41c12b0 con 0x7fe9d411e280 2026-03-10T13:37:58.864 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.863+0000 7fe9dc2d1640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe9d41c2490 con 0x7fe9d41bd7a0 2026-03-10T13:37:58.864 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.863+0000 7fe9dc2d1640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fe9d41c3670 con 0x7fe9d411a770 2026-03-10T13:37:58.865 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.864+0000 7fe9da847640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7fe9d41bd7a0 0x7fe9d41bfb90 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:41970/0 (socket says 192.168.123.105:41970) 2026-03-10T13:37:58.865 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.864+0000 7fb893ff0640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fb88c1c1090 con 0x7fb88c074230 2026-03-10T13:37:58.865 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.864+0000 7fb893ff0640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fb88c1c2270 con 0x7fb88c11e280 2026-03-10T13:37:58.865 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.864+0000 7fd1d54c8640 1 -- 192.168.123.105:0/447014465 shutdown_connections 2026-03-10T13:37:58.865 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.864+0000 7fd1d54c8640 1 -- 192.168.123.105:0/447014465 >> 192.168.123.105:0/447014465 conn(0x7fd1d006d560 msgr2=0x7fd1d006d970 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:58.865 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.864+0000 7fb893ff0640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fb88c1c3470 con 0x7fb88c11a770 2026-03-10T13:37:58.865 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.864+0000 7fb892566640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7fb88c11e280 0x7fb88c1bf7b0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:41956/0 (socket says 192.168.123.105:41956) 2026-03-10T13:37:58.865 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.864+0000 7fb892566640 1 -- 192.168.123.105:0/837192689 learned_addr learned my addr 192.168.123.105:0/837192689 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:58.865 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.865+0000 7fd1d54c8640 1 -- 192.168.123.105:0/447014465 shutdown_connections 2026-03-10T13:37:58.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.865+0000 7fd1d54c8640 1 -- 192.168.123.105:0/447014465 wait complete. 
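
[annotation] The interleaved messenger lines in this stretch come from several short-lived `ceph` CLI clients running concurrently: each picks a fresh nonce (the number after `addr:port/`), sends `auth(proto 0 ...)` probes to all three mons, completes cephx (`auth(proto 2 ...)` / `auth_reply ... Success`), learns its own address from the server banner, subscribes to the config/mon/mgr/osd maps, and finally marks its connections down. Grouping lines by that nonce token is one way to untangle the trace; a minimal sketch, assuming Python 3 and the raw log saved as run.log (hypothetical filename):

    #!/usr/bin/env python3
    # Group messenger lines by client instance ("addr:port/nonce" after
    # "-- ") to follow each short-lived client's connect/auth/mark_down
    # lifecycle in a log like the one above.
    import re
    from collections import defaultdict

    ENTITY = re.compile(r" 1 -- (\d+\.\d+\.\d+\.\d+:\d+/\d+) ")

    events = defaultdict(list)
    with open("run.log") as f:
        for line in f:
            m = ENTITY.search(line)
            if m:
                events[m.group(1)].append(line.rstrip())

    for entity, lines in events.items():
        print(f"{entity}: {len(lines)} messenger events")
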
2026-03-10T13:37:58.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.866+0000 7fd1d54c8640 1 Processor -- start 2026-03-10T13:37:58.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.864+0000 7fe9da847640 1 -- 192.168.123.105:0/166516373 learned_addr learned my addr 192.168.123.105:0/166516373 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:58.866 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.866+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4061342795 0 0) 0x7fb88c1c2270 con 0x7fb88c11e280 2026-03-10T13:37:58.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.867+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2754482911 0 0) 0x7fe9d41c12b0 con 0x7fe9d411e280 2026-03-10T13:37:58.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.867+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fb860003620 con 0x7fb88c11e280 2026-03-10T13:37:58.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.867+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4145049933 0 0) 0x7fb88c1c1090 con 0x7fb88c074230 2026-03-10T13:37:58.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.867+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fe9b0003620 con 0x7fe9d411e280 2026-03-10T13:37:58.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.867+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fb88c1c2270 con 0x7fb88c074230 2026-03-10T13:37:58.867 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.867+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2665285569 0 0) 0x7fe9d41c2490 con 0x7fe9d41bd7a0 2026-03-10T13:37:58.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.867+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4270021118 0 0) 0x7fb88c1c3470 con 0x7fb88c11a770 2026-03-10T13:37:58.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.868+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fb88c1c1090 con 0x7fb88c11a770 2026-03-10T13:37:58.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.866+0000 7fd1d54c8640 1 -- start start 2026-03-10T13:37:58.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.867+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fe9d41c12b0 con 0x7fe9d41bd7a0 2026-03-10T13:37:58.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.868+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 96708911 0 0) 0x7fb860003620 con 0x7fb88c11e280 2026-03-10T13:37:58.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.868+0000 7fd1d54c8640 1 -- --> 
v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd1d01c0ea0 con 0x7fd1d011e280 2026-03-10T13:37:58.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.868+0000 7fd1d54c8640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd1d01c2080 con 0x7fd1d011a770 2026-03-10T13:37:58.868 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.868+0000 7fd1d54c8640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd1d01c3260 con 0x7fd1d0074040 2026-03-10T13:37:58.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.868+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 862699416 0 0) 0x7fe9d41c3670 con 0x7fe9d411a770 2026-03-10T13:37:58.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.868+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fe9d41c2490 con 0x7fe9d411a770 2026-03-10T13:37:58.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.868+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fb88c1c3470 con 0x7fb88c11e280 2026-03-10T13:37:58.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.869+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1139503207 0 0) 0x7fb88c1c2270 con 0x7fb88c074230 2026-03-10T13:37:58.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.869+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fb860003620 con 0x7fb88c074230 2026-03-10T13:37:58.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.869+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2670203982 0 0) 0x7fe9b0003620 con 0x7fe9d411e280 2026-03-10T13:37:58.869 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.869+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fe9d41c3670 con 0x7fe9d411e280 2026-03-10T13:37:58.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.869+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1768291523 0 0) 0x7fb88c1c1090 con 0x7fb88c11a770 2026-03-10T13:37:58.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.868+0000 7f8089918640 1 -- 192.168.123.105:0/2327485222 >> v1:192.168.123.109:6789/0 conn(0x7f8084074230 legacy=0x7f8084074610 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.869+0000 7f8089918640 1 -- 192.168.123.105:0/2327485222 shutdown_connections 2026-03-10T13:37:58.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.869+0000 7f8089918640 1 -- 192.168.123.105:0/2327485222 >> 192.168.123.105:0/2327485222 conn(0x7f808406e900 msgr2=0x7f808406ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:58.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.869+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) 
Success) ==== 764+0+0 (unknown 3570105282 0 0) 0x7fe9d41c12b0 con 0x7fe9d41bd7a0 2026-03-10T13:37:58.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.870+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fb88c1c2270 con 0x7fb88c11a770 2026-03-10T13:37:58.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.868+0000 7fd1cf7fe640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7fd1d011a770 0x7fd1d0118d90 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:41984/0 (socket says 192.168.123.105:41984) 2026-03-10T13:37:58.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.868+0000 7fd1cf7fe640 1 -- 192.168.123.105:0/2879678431 learned_addr learned my addr 192.168.123.105:0/2879678431 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:58.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.870+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fb8840033f0 con 0x7fb88c11e280 2026-03-10T13:37:58.870 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.870+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fe9b0003620 con 0x7fe9d41bd7a0 2026-03-10T13:37:58.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.870+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fb87c003040 con 0x7fb88c074230 2026-03-10T13:37:58.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.870+0000 7f8089918640 1 -- 192.168.123.105:0/2327485222 shutdown_connections 2026-03-10T13:37:58.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.870+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 4242740216 0 0) 0x7fe9d41c2490 con 0x7fe9d411a770 2026-03-10T13:37:58.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.870+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fe9d41c12b0 con 0x7fe9d411a770 2026-03-10T13:37:58.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.871+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fb880003150 con 0x7fb88c11a770 2026-03-10T13:37:58.871 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.871+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4141783581 0 0) 0x7fd1d01c2080 con 0x7fd1d011a770 2026-03-10T13:37:58.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.871+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd1a0003620 con 0x7fd1d011a770 2026-03-10T13:37:58.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.871+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2980795378 0 0) 0x7fd1d01c0ea0 con 0x7fd1d011e280 2026-03-10T13:37:58.872 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.871+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd1d01c2080 con 0x7fd1d011e280 2026-03-10T13:37:58.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.871+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1008695034 0 0) 0x7fd1d01c3260 con 0x7fd1d0074040 2026-03-10T13:37:58.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.872+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fe9c8002f90 con 0x7fe9d411e280 2026-03-10T13:37:58.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.870+0000 7f8089918640 1 -- 192.168.123.105:0/2327485222 wait complete. 2026-03-10T13:37:58.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.872+0000 7f8089918640 1 Processor -- start 2026-03-10T13:37:58.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.872+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2340707717 0 0) 0x7fb88c1c3470 con 0x7fb88c11e280 2026-03-10T13:37:58.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.872+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fe9d0003450 con 0x7fe9d41bd7a0 2026-03-10T13:37:58.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.872+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 >> v1:192.168.123.105:6790/0 conn(0x7fb88c11a770 legacy=0x7fb88c118db0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.872 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.872+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 >> v1:192.168.123.105:6789/0 conn(0x7fb88c074230 legacy=0x7fb88c10e2a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.872+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fe9c40031b0 con 0x7fe9d411a770 2026-03-10T13:37:58.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.873+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb88c1c4670 con 0x7fb88c11e280 2026-03-10T13:37:58.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.873+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 4292526281 0 0) 0x7fe9d41c3670 con 0x7fe9d411e280 2026-03-10T13:37:58.873 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.873+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 >> v1:192.168.123.105:6790/0 conn(0x7fe9d411a770 legacy=0x7fe9d410e0b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.874 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.873+0000 7fb893ff0640 1 -- 192.168.123.105:0/837192689 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fb88c1c12c0 con 0x7fb88c11e280 2026-03-10T13:37:58.874 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.873+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 >> v1:192.168.123.109:6789/0 conn(0x7fe9d41bd7a0 legacy=0x7fe9d41bfb90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.877+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe9d41c4850 con 0x7fe9d411e280 2026-03-10T13:37:58.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.871+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd1d01c0ea0 con 0x7fd1d0074040 2026-03-10T13:37:58.877 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.877+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3140165377 0 0) 0x7fd1a0003620 con 0x7fd1d011a770 2026-03-10T13:37:58.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.877+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd1d01c3260 con 0x7fd1d011a770 2026-03-10T13:37:58.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.877+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 709814902 0 0) 0x7fd1d01c2080 con 0x7fd1d011e280 2026-03-10T13:37:58.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.877+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd1a0003620 con 0x7fd1d011e280 2026-03-10T13:37:58.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.877+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 721906260 0 0) 0x7fd1d01c0ea0 con 0x7fd1d0074040 2026-03-10T13:37:58.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.878+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd1d01c2080 con 0x7fd1d0074040 2026-03-10T13:37:58.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.878+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fd1c0003b30 con 0x7fd1d011a770 2026-03-10T13:37:58.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.873+0000 7fb893ff0640 1 -- 192.168.123.105:0/837192689 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fb88c1c1780 con 0x7fb88c11e280 2026-03-10T13:37:58.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.872+0000 7f8089918640 1 -- start start 2026-03-10T13:37:58.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.878+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fb884003ee0 con 0x7fb88c11e280 2026-03-10T13:37:58.878 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.878+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fb884005f00 con 0x7fb88c11e280 2026-03-10T13:37:58.879 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.878+0000 7f8089918640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f80841b8530 con 0x7f808410ba30 2026-03-10T13:37:58.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.878+0000 7f8089918640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f80841b9730 con 0x7f808411e280 2026-03-10T13:37:58.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.878+0000 7f8089918640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f80841ba930 con 0x7f808411a770 2026-03-10T13:37:58.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.879+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fd1c4002fb0 con 0x7fd1d011e280 2026-03-10T13:37:58.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.879+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fd1b8002fc0 con 0x7fd1d0074040 2026-03-10T13:37:58.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.879+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2458786988 0 0) 0x7fd1d01c3260 con 0x7fd1d011a770 2026-03-10T13:37:58.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.879+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 >> v1:192.168.123.105:6790/0 conn(0x7fd1d0074040 legacy=0x7fd1d010f170 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.879+0000 7f80827fc640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f808411e280 0x7f80841b3500 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:41992/0 (socket says 192.168.123.105:41992) 2026-03-10T13:37:58.879 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.879+0000 7f80827fc640 1 -- 192.168.123.105:0/3879799599 learned_addr learned my addr 192.168.123.105:0/3879799599 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:58.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.877+0000 7fe9dc2d1640 1 -- 192.168.123.105:0/166516373 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fe9d41c14e0 con 0x7fe9d411e280 2026-03-10T13:37:58.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.877+0000 7fe9dc2d1640 1 -- 192.168.123.105:0/166516373 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fe9d41c1a20 con 0x7fe9d411e280 2026-03-10T13:37:58.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.879+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 >> v1:192.168.123.105:6789/0 conn(0x7fd1d011e280 legacy=0x7fd1d01bf780 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.880 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.880+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fe9c8003130 con 0x7fe9d411e280 2026-03-10T13:37:58.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.880+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map 
magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fe9c8004820 con 0x7fe9d411e280 2026-03-10T13:37:58.881 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.880+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd1d01c4440 con 0x7fd1d011a770 2026-03-10T13:37:58.882 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.881+0000 7f87e722d640 1 -- 192.168.123.105:0/3525225858 --> v1:192.168.123.105:6813/693788844 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f87a8002d70 con 0x7f87b0083000 2026-03-10T13:37:58.882 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.882+0000 7fb893ff0640 1 -- 192.168.123.105:0/837192689 --> v1:192.168.123.109:6789/0 -- mon_get_version(what=osdmap handle=1) -- 0x7fb854000f80 con 0x7fb88c11e280 2026-03-10T13:37:58.882 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.880+0000 7fd1d54c8640 1 -- 192.168.123.105:0/2879678431 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fd1d01c10d0 con 0x7fd1d011a770 2026-03-10T13:37:58.883 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.880+0000 7fd1d54c8640 1 -- 192.168.123.105:0/2879678431 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fd1d01c1590 con 0x7fd1d011a770 2026-03-10T13:37:58.883 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.882+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fd1c00042c0 con 0x7fd1d011a770 2026-03-10T13:37:58.883 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.882+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fd1c0005130 con 0x7fd1d011a770 2026-03-10T13:37:58.883 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.883+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7fd1c00053b0 con 0x7fd1d011a770 2026-03-10T13:37:58.883 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.883+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1312679277 0 0) 0x7f80841b8530 con 0x7f808410ba30 2026-03-10T13:37:58.883 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.883+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f804c003620 con 0x7f808410ba30 2026-03-10T13:37:58.884 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.883+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7fe9c8005ad0 con 0x7fe9d411e280 2026-03-10T13:37:58.884 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.883+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1871555956 0 0) 0x7f80841ba930 con 0x7f808411a770 2026-03-10T13:37:58.884 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.884+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f80841b8530 con 0x7f808411a770 2026-03-10T13:37:58.884 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.884+0000 7f87d5ffb640 1 -- 192.168.123.105:0/3525225858 <== osd.3 v1:192.168.123.105:6813/693788844 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (unknown 0 0 1965574230) 0x7f87a8002d70 con 0x7f87b0083000 2026-03-10T13:37:58.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.884+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3094784355 0 0) 0x7f80841b9730 con 0x7f808411e280 2026-03-10T13:37:58.885 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.884+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f80841ba930 con 0x7f808411e280 2026-03-10T13:37:58.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.885+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1201965777 0 0) 0x7f804c003620 con 0x7f808410ba30 2026-03-10T13:37:58.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.885+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f80841b9730 con 0x7f808410ba30 2026-03-10T13:37:58.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.885+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 699599933 0 0) 0x7f80841b8530 con 0x7f808411a770 2026-03-10T13:37:58.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.885+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7fd1c00953c0 con 0x7fd1d011a770 2026-03-10T13:37:58.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.885+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f804c003620 con 0x7f808411a770 2026-03-10T13:37:58.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.886+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2322558886 0 0) 0x7f80841ba930 con 0x7f808411e280 2026-03-10T13:37:58.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.886+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f80841b8530 con 0x7f808411e280 2026-03-10T13:37:58.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.886+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f80740032b0 con 0x7f808410ba30 2026-03-10T13:37:58.886 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.886+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f8078003df0 con 0x7f808411a770 2026-03-10T13:37:58.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.886+0000 7f87e722d640 1 -- 192.168.123.105:0/3525225858 >> v1:192.168.123.105:6813/693788844 conn(0x7f87b0083000 legacy=0x7f87b0085460 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.887 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.886+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7fe9c8094a70 con 0x7fe9d411e280 2026-03-10T13:37:58.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.887+0000 7f87e722d640 1 -- 192.168.123.105:0/3525225858 >> v1:192.168.123.105:6800/3845654103 conn(0x7f87b0078960 legacy=0x7f87b007ae20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.887 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.887+0000 7fe9c17fa640 1 -- 192.168.123.105:0/166516373 --> v1:192.168.123.109:6808/354656606 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7fe99c0053a0 con 0x7fe99c001630 2026-03-10T13:37:58.888 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.888+0000 7fd1d54c8640 1 -- 192.168.123.105:0/2879678431 --> v1:192.168.123.105:6809/3999426341 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7fd19c0053a0 con 0x7fd19c001630 2026-03-10T13:37:58.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.888+0000 7f87e722d640 1 -- 192.168.123.105:0/3525225858 >> v1:192.168.123.109:6789/0 conn(0x7f87e011e280 legacy=0x7f87e0119d70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.888+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f806c002fb0 con 0x7f808411e280 2026-03-10T13:37:58.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.888+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1681410839 0 0) 0x7f80841b9730 con 0x7f808410ba30 2026-03-10T13:37:58.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.888+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 >> v1:192.168.123.105:6790/0 conn(0x7f808411a770 legacy=0x7f808410b0a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.888+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 >> v1:192.168.123.109:6789/0 conn(0x7f808411e280 legacy=0x7f80841b3500 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.888+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f80841bbb30 con 0x7f808410ba30 2026-03-10T13:37:58.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.888+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== osd.2 v1:192.168.123.105:6809/3999426341 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7fd19c0053a0 con 0x7fd19c001630 2026-03-10T13:37:58.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.888+0000 7f87e57a3640 1 -- 192.168.123.105:0/3525225858 reap_dead start 2026-03-10T13:37:58.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.889+0000 7f87e722d640 1 -- 192.168.123.105:0/3525225858 shutdown_connections 2026-03-10T13:37:58.889 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.889+0000 7f87e722d640 1 -- 192.168.123.105:0/3525225858 >> 192.168.123.105:0/3525225858 conn(0x7f87e006e900 msgr2=0x7f87e0072bd0 
unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:58.890 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.889+0000 7f87e722d640 1 -- 192.168.123.105:0/3525225858 shutdown_connections 2026-03-10T13:37:58.890 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.889+0000 7f87e722d640 1 -- 192.168.123.105:0/3525225858 wait complete. 2026-03-10T13:37:58.890 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.890+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== osd.6 v1:192.168.123.109:6808/354656606 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7fe99c0053a0 con 0x7fe99c001630 2026-03-10T13:37:58.891 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.888+0000 7f8089918640 1 -- 192.168.123.105:0/3879799599 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f80841b8760 con 0x7f808410ba30 2026-03-10T13:37:58.891 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.888+0000 7f8089918640 1 -- 192.168.123.105:0/3879799599 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f80841b8cf0 con 0x7f808410ba30 2026-03-10T13:37:58.898 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.894+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f8074003d10 con 0x7f808410ba30 2026-03-10T13:37:58.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.894+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f8074004b90 con 0x7f808410ba30 2026-03-10T13:37:58.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.907+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7f8074005e60 con 0x7f808410ba30 2026-03-10T13:37:58.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.909+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7fb88401e880 con 0x7fb88c11e280 2026-03-10T13:37:58.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.910+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7fb884095180 con 0x7fb88c11e280 2026-03-10T13:37:58.911 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.910+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 --> v1:192.168.123.109:6800/3898346219 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7fb8600868f0 con 0x7fb860082c20 2026-03-10T13:37:58.911 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.911+0000 7ff231671640 1 -- 192.168.123.105:0/3141670822 >> v1:192.168.123.109:6789/0 conn(0x7ff22c07ae70 legacy=0x7ff22c07d330 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.911 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.911+0000 7ff231671640 1 -- 192.168.123.105:0/3141670822 shutdown_connections 2026-03-10T13:37:58.911 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.911+0000 7ff231671640 1 -- 192.168.123.105:0/3141670822 >> 192.168.123.105:0/3141670822 conn(0x7ff22c06e900 msgr2=0x7ff22c06ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:58.911 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.911+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f8074094f70 con 0x7f808410ba30 2026-03-10T13:37:58.914 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.911+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== osd.4 v1:192.168.123.109:6800/3898346219 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7fb8600868f0 con 0x7fb860082c20 2026-03-10T13:37:58.914 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.914+0000 7f80627fc640 1 -- 192.168.123.105:0/3879799599 --> v1:192.168.123.109:6804/452558008 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f80500053a0 con 0x7f8050001630 2026-03-10T13:37:58.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.919+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== osd.5 v1:192.168.123.109:6804/452558008 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7f80500053a0 con 0x7f8050001630 2026-03-10T13:37:58.921 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.921+0000 7fe9c17fa640 1 -- 192.168.123.105:0/166516373 --> v1:192.168.123.109:6808/354656606 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7fe99c007130 con 0x7fe99c001630 2026-03-10T13:37:58.921 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.911+0000 7ff231671640 1 -- 192.168.123.105:0/3141670822 shutdown_connections 2026-03-10T13:37:58.921 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.911+0000 7ff231671640 1 -- 192.168.123.105:0/3141670822 wait complete. 2026-03-10T13:37:58.922 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.918+0000 7ff231671640 1 Processor -- start 2026-03-10T13:37:58.922 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.918+0000 7ff231671640 1 -- start start 2026-03-10T13:37:58.922 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.918+0000 7ff231671640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff22c1c10b0 con 0x7ff22c074230 2026-03-10T13:37:58.922 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.918+0000 7ff231671640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff22c1c22b0 con 0x7ff22c07d650 2026-03-10T13:37:58.922 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.918+0000 7ff231671640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff22c1c34b0 con 0x7ff22c0772b0 2026-03-10T13:37:58.922 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.919+0000 7ff22affd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7ff22c074230 0x7ff22c07cce0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:60210/0 (socket says 192.168.123.105:60210) 2026-03-10T13:37:58.922 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.919+0000 7ff22affd640 1 -- 192.168.123.105:0/3052065860 learned_addr learned my addr 192.168.123.105:0/3052065860 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:58.922 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.922+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3147931177 0 0) 0x7ff22c1c34b0 con 0x7ff22c0772b0 2026-03-10T13:37:58.923 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.922+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff1fc003620 con 0x7ff22c0772b0 2026-03-10T13:37:58.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.923+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3529644563 0 0) 0x7ff22c1c22b0 con 0x7ff22c07d650 2026-03-10T13:37:58.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.923+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff22c1c34b0 con 0x7ff22c07d650 2026-03-10T13:37:58.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.923+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1642479060 0 0) 0x7ff22c1c10b0 con 0x7ff22c074230 2026-03-10T13:37:58.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.924+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff22c1c22b0 con 0x7ff22c074230 2026-03-10T13:37:58.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.924+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1475296864 0 0) 0x7ff1fc003620 con 0x7ff22c0772b0 2026-03-10T13:37:58.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.924+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7ff22c1c10b0 con 0x7ff22c0772b0 2026-03-10T13:37:58.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.924+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7ff220003090 con 0x7ff22c0772b0 2026-03-10T13:37:58.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.925+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2062825518 0 0) 0x7ff22c1c34b0 con 0x7ff22c07d650 2026-03-10T13:37:58.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.925+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7ff1fc003620 con 0x7ff22c07d650 2026-03-10T13:37:58.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.925+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 133854075 0 0) 0x7ff22c1c22b0 con 0x7ff22c074230 2026-03-10T13:37:58.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.925+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7ff22c1c34b0 con 0x7ff22c074230 2026-03-10T13:37:58.933 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.931+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 758737099 0 0) 0x7ff22c1c10b0 con 0x7ff22c0772b0 2026-03-10T13:37:58.933 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.932+0000 
7ff20bfff640 1 -- 192.168.123.105:0/3052065860 >> v1:192.168.123.109:6789/0 conn(0x7ff22c07d650 legacy=0x7ff22c07af50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.933 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.933+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 >> v1:192.168.123.105:6789/0 conn(0x7ff22c074230 legacy=0x7ff22c07cce0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.933 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.933+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff22c1c46b0 con 0x7ff22c0772b0 2026-03-10T13:37:58.935 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.933+0000 7ff231671640 1 -- 192.168.123.105:0/3052065860 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7ff22c1c36e0 con 0x7ff22c0772b0 2026-03-10T13:37:58.936 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.935+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_get_version_reply(handle=1 version=58) ==== 24+0+0 (unknown 2701689922 0 0) 0x7fb88405d0f0 con 0x7fb88c11e280 2026-03-10T13:37:58.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.936+0000 7fb893ff0640 1 -- 192.168.123.105:0/837192689 --> v1:192.168.123.109:6800/3898346219 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7fb854002d70 con 0x7fb860082c20 2026-03-10T13:37:58.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.937+0000 7fe9c37fe640 1 -- 192.168.123.105:0/166516373 <== osd.6 v1:192.168.123.109:6808/354656606 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (unknown 0 0 1566799740) 0x7fe99c007130 con 0x7fe99c001630 2026-03-10T13:37:58.938 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.937+0000 7fe9dc2d1640 1 -- 192.168.123.105:0/166516373 >> v1:192.168.123.109:6808/354656606 conn(0x7fe99c001630 legacy=0x7fe99c003af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.938 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.938+0000 7fe9dc2d1640 1 -- 192.168.123.105:0/166516373 >> v1:192.168.123.105:6800/3845654103 conn(0x7fe9b00788a0 legacy=0x7fe9b007ad60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.938 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.938+0000 7fe9dc2d1640 1 -- 192.168.123.105:0/166516373 >> v1:192.168.123.105:6789/0 conn(0x7fe9d411e280 legacy=0x7fe9d4118d60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.938 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.938+0000 7fe9da847640 1 -- 192.168.123.105:0/166516373 reap_dead start 2026-03-10T13:37:58.938 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.938+0000 7fe9dc2d1640 1 -- 192.168.123.105:0/166516373 shutdown_connections 2026-03-10T13:37:58.938 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.938+0000 7fe9dc2d1640 1 -- 192.168.123.105:0/166516373 >> 192.168.123.105:0/166516373 conn(0x7fe9d406e900 msgr2=0x7fe9d40727e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:58.939 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.938+0000 7fe9dc2d1640 1 -- 192.168.123.105:0/166516373 shutdown_connections 2026-03-10T13:37:58.939 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.938+0000 7fe9dc2d1640 1 -- 192.168.123.105:0/166516373 wait complete. 
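The stderr lines above are the client-side view of a v1-only monitor session under the job's "debug ms: 1" client override: each short-lived ceph CLI probes all three mons with an unauthenticated auth(proto 0) request, completes the two-step cephx exchange (proto 2, 36- then 165-byte payloads) on every connection, keeps one mon session and mark_downs the rest, then subscribes to config, monmap, mgrmap, and osdmap. A minimal sketch of reproducing such a trace with the python3-rados bindings (assumes a reachable cluster and a valid keyring; not part of this run):

    import rados

    # Mirror the job's client overrides: messenger debugging at level 1,
    # legacy v1 protocol only (msgr/async-v1only).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.conf_set('debug_ms', '1')
    cluster.conf_set('ms_bind_msgr2', 'false')

    cluster.connect()        # cephx handshake + mon_subscribe, as logged above
    print(cluster.get_fsid())
    cluster.shutdown()       # mark_down / shutdown_connections / wait complete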
2026-03-10T13:37:58.939 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.933+0000 7ff231671640 1 -- 192.168.123.105:0/3052065860 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7ff22c1c3bf0 con 0x7ff22c0772b0 2026-03-10T13:37:58.939 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.939+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7ff220002c30 con 0x7ff22c0772b0 2026-03-10T13:37:58.939 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.939+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7ff220004840 con 0x7ff22c0772b0 2026-03-10T13:37:58.940 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.939+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7ff220004b40 con 0x7ff22c0772b0 2026-03-10T13:37:58.941 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.941+0000 7fb87affd640 1 -- 192.168.123.105:0/837192689 <== osd.4 v1:192.168.123.109:6800/3898346219 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (unknown 0 0 1495652798) 0x7fb854002d70 con 0x7fb860082c20 2026-03-10T13:37:58.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.942+0000 7fd1d54c8640 1 -- 192.168.123.105:0/2879678431 --> v1:192.168.123.105:6809/3999426341 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7fd19c007130 con 0x7fd19c001630 2026-03-10T13:37:58.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.942+0000 7fb893ff0640 1 -- 192.168.123.105:0/837192689 >> v1:192.168.123.109:6800/3898346219 conn(0x7fb860082c20 legacy=0x7fb860085080 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.943+0000 7fb893ff0640 1 -- 192.168.123.105:0/837192689 >> v1:192.168.123.105:6800/3845654103 conn(0x7fb860078600 legacy=0x7fb86007aac0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.943+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7ff220094c30 con 0x7ff22c0772b0 2026-03-10T13:37:58.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.943+0000 7ff209ffb640 1 -- 192.168.123.105:0/3052065860 --> v1:192.168.123.109:6812/3977889858 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7ff1f80053a0 con 0x7ff1f8001630 2026-03-10T13:37:58.944 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.943+0000 7fb893ff0640 1 -- 192.168.123.105:0/837192689 >> v1:192.168.123.109:6789/0 conn(0x7fb88c11e280 legacy=0x7fb88c1bf7b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.945+0000 7fd1cd7fa640 1 -- 192.168.123.105:0/2879678431 <== osd.2 v1:192.168.123.105:6809/3999426341 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (unknown 0 0 568869803) 0x7fd19c007130 con 0x7fd19c001630 2026-03-10T13:37:58.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.945+0000 7fb892566640 1 -- 192.168.123.105:0/837192689 reap_dead start 2026-03-10T13:37:58.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.945+0000 
7fd1d54c8640 1 -- 192.168.123.105:0/2879678431 >> v1:192.168.123.105:6809/3999426341 conn(0x7fd19c001630 legacy=0x7fd19c003af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.946+0000 7fd1d54c8640 1 -- 192.168.123.105:0/2879678431 >> v1:192.168.123.105:6800/3845654103 conn(0x7fd1a0078900 legacy=0x7fd1a007adc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.946+0000 7fb893ff0640 1 -- 192.168.123.105:0/837192689 shutdown_connections 2026-03-10T13:37:58.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.946+0000 7fb893ff0640 1 -- 192.168.123.105:0/837192689 >> 192.168.123.105:0/837192689 conn(0x7fb88c06e900 msgr2=0x7fb88c072bd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:58.946 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.946+0000 7fb893ff0640 1 -- 192.168.123.105:0/837192689 shutdown_connections 2026-03-10T13:37:58.947 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.946+0000 7fd1d54c8640 1 -- 192.168.123.105:0/2879678431 >> v1:192.168.123.109:6789/0 conn(0x7fd1d011a770 legacy=0x7fd1d0118d90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.947 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.946+0000 7fd1d4cc7640 1 -- 192.168.123.105:0/2879678431 reap_dead start 2026-03-10T13:37:58.947 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.947+0000 7fb893ff0640 1 -- 192.168.123.105:0/837192689 wait complete. 2026-03-10T13:37:58.947 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.947+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 <== osd.7 v1:192.168.123.109:6812/3977889858 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7ff1f80053a0 con 0x7ff1f8001630 2026-03-10T13:37:58.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.951+0000 7fd1d54c8640 1 -- 192.168.123.105:0/2879678431 shutdown_connections 2026-03-10T13:37:58.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.951+0000 7fd1d54c8640 1 -- 192.168.123.105:0/2879678431 >> 192.168.123.105:0/2879678431 conn(0x7fd1d006d560 msgr2=0x7fd1d0110b20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:58.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.951+0000 7fd1d54c8640 1 -- 192.168.123.105:0/2879678431 shutdown_connections 2026-03-10T13:37:58.951 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.951+0000 7fd1d54c8640 1 -- 192.168.123.105:0/2879678431 wait complete. 
2026-03-10T13:37:58.972 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.972+0000 7f80627fc640 1 -- 192.168.123.105:0/3879799599 --> v1:192.168.123.109:6804/452558008 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f8050007130 con 0x7f8050001630 2026-03-10T13:37:58.974 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.974+0000 7f8088916640 1 -- 192.168.123.105:0/3879799599 <== osd.5 v1:192.168.123.109:6804/452558008 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (unknown 0 0 3242197708) 0x7f8050007130 con 0x7f8050001630 2026-03-10T13:37:58.977 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.976+0000 7f80627fc640 1 -- 192.168.123.105:0/3879799599 >> v1:192.168.123.109:6804/452558008 conn(0x7f8050001630 legacy=0x7f8050003af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.978 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.977+0000 7f80627fc640 1 -- 192.168.123.105:0/3879799599 >> v1:192.168.123.105:6800/3845654103 conn(0x7f804c0789e0 legacy=0x7f804c07aea0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.978 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.977+0000 7f80627fc640 1 -- 192.168.123.105:0/3879799599 >> v1:192.168.123.105:6789/0 conn(0x7f808410ba30 legacy=0x7f80841b6c30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:58.978 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.978+0000 7f80837fe640 1 -- 192.168.123.105:0/3879799599 reap_dead start 2026-03-10T13:37:58.979 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.979+0000 7f80627fc640 1 -- 192.168.123.105:0/3879799599 shutdown_connections 2026-03-10T13:37:58.979 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.979+0000 7f80627fc640 1 -- 192.168.123.105:0/3879799599 >> 192.168.123.105:0/3879799599 conn(0x7f808406e900 msgr2=0x7f8084071c60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:58.979 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.979+0000 7f80627fc640 1 -- 192.168.123.105:0/3879799599 shutdown_connections 2026-03-10T13:37:58.979 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.979+0000 7f80627fc640 1 -- 192.168.123.105:0/3879799599 wait complete. 
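Each command(tid 1: {"prefix": "get_command_descriptions"}) / command_reply pair in the trace above is the CLI fetching the target OSD's command table (the ~27 KB reply payload) so it can validate the requested command before dispatching it as tid 2. The same round-trip can be driven by hand; a hedged illustration (the daemon command is real, the wrapper and the JSON handling are assumptions):

    import json
    import subprocess

    # Fetch the command table the CLI pulls as tid 1 above; `ceph tell`
    # prints the reply payload, assumed here to be a JSON object.
    raw = subprocess.run(['ceph', 'tell', 'osd.5', 'get_command_descriptions'],
                         check=True, capture_output=True, text=True).stdout
    table = json.loads(raw)
    print(f'{len(table)} command descriptions')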
2026-03-10T13:37:59.007 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.006+0000 7f9121710640 1 -- 192.168.123.105:0/3080220073 >> v1:192.168.123.105:6789/0 conn(0x7f911c0772b0 legacy=0x7f911c079750 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:59.009 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.007+0000 7f9121710640 1 -- 192.168.123.105:0/3080220073 shutdown_connections 2026-03-10T13:37:59.009 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.007+0000 7f9121710640 1 -- 192.168.123.105:0/3080220073 >> 192.168.123.105:0/3080220073 conn(0x7f911c06e900 msgr2=0x7f911c06ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:59.009 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:58.994+0000 7ff209ffb640 1 -- 192.168.123.105:0/3052065860 --> v1:192.168.123.109:6812/3977889858 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7ff1f8007130 con 0x7ff1f8001630 2026-03-10T13:37:59.009 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.009+0000 7ff20bfff640 1 -- 192.168.123.105:0/3052065860 <== osd.7 v1:192.168.123.109:6812/3977889858 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (unknown 0 0 2892782965) 0x7ff1f8007130 con 0x7ff1f8001630 2026-03-10T13:37:59.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.018+0000 7ff209ffb640 1 -- 192.168.123.105:0/3052065860 >> v1:192.168.123.109:6812/3977889858 conn(0x7ff1f8001630 legacy=0x7ff1f8003af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:59.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.018+0000 7ff209ffb640 1 -- 192.168.123.105:0/3052065860 >> v1:192.168.123.105:6800/3845654103 conn(0x7ff1fc078990 legacy=0x7ff1fc07ae50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:59.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.007+0000 7f9121710640 1 -- 192.168.123.105:0/3080220073 shutdown_connections 2026-03-10T13:37:59.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.008+0000 7f9121710640 1 -- 192.168.123.105:0/3080220073 wait complete. 
2026-03-10T13:37:59.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.008+0000 7f9121710640 1 Processor -- start 2026-03-10T13:37:59.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.008+0000 7f9121710640 1 -- start start 2026-03-10T13:37:59.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.008+0000 7f9121710640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f911c1c1080 con 0x7f911c07ae70 2026-03-10T13:37:59.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.008+0000 7f9121710640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f911c1c2260 con 0x7f911c10bad0 2026-03-10T13:37:59.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.008+0000 7f9121710640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f911c1c3440 con 0x7f911c074230 2026-03-10T13:37:59.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.018+0000 7f911affd640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f911c074230 0x7f911c10b180 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:58908/0 (socket says 192.168.123.105:58908) 2026-03-10T13:37:59.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.018+0000 7f911affd640 1 -- 192.168.123.105:0/3057926101 learned_addr learned my addr 192.168.123.105:0/3057926101 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:37:59.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.019+0000 7ff209ffb640 1 -- 192.168.123.105:0/3052065860 >> v1:192.168.123.105:6790/0 conn(0x7ff22c0772b0 legacy=0x7ff22c07a770 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:59.019 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.019+0000 7ff22b7fe640 1 -- 192.168.123.105:0/3052065860 reap_dead start 2026-03-10T13:37:59.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.019+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 784268965 0 0) 0x7f911c1c3440 con 0x7f911c074230 2026-03-10T13:37:59.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.019+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f90ec003620 con 0x7f911c074230 2026-03-10T13:37:59.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.019+0000 7ff209ffb640 1 -- 192.168.123.105:0/3052065860 shutdown_connections 2026-03-10T13:37:59.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.019+0000 7ff209ffb640 1 -- 192.168.123.105:0/3052065860 >> 192.168.123.105:0/3052065860 conn(0x7ff22c06e900 msgr2=0x7ff22c076310 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:59.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 979413044 0 0) 0x7f911c1c1080 con 0x7f911c07ae70 2026-03-10T13:37:59.020 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f911c1c3440 con 0x7f911c07ae70 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7ff209ffb640 1 -- 
192.168.123.105:0/3052065860 shutdown_connections 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7ff209ffb640 1 -- 192.168.123.105:0/3052065860 wait complete. 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2912437653 0 0) 0x7f911c1c2260 con 0x7f911c10bad0 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f911c1c1080 con 0x7f911c10bad0 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2106271917 0 0) 0x7f90ec003620 con 0x7f911c074230 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f911c1c2260 con 0x7f911c074230 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2107541254 0 0) 0x7f911c1c3440 con 0x7f911c07ae70 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f90ec003620 con 0x7f911c07ae70 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f9114003520 con 0x7f911c074230 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f910c0031b0 con 0x7f911c07ae70 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3413665137 0 0) 0x7f911c1c1080 con 0x7f911c10bad0 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f911c1c3440 con 0x7f911c10bad0 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3509779974 0 0) 0x7f911c1c2260 con 0x7f911c074230 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 >> v1:192.168.123.109:6789/0 conn(0x7f911c10bad0 legacy=0x7f911c10e0a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.020+0000 
7f90fbfff640 1 -- 192.168.123.105:0/3057926101 >> v1:192.168.123.105:6789/0 conn(0x7f911c07ae70 legacy=0x7f911c10d950 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:59.021 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.021+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f911c1c4620 con 0x7f911c074230 2026-03-10T13:37:59.024 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.021+0000 7f9121710640 1 -- 192.168.123.105:0/3057926101 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f911c1c2490 con 0x7f911c074230 2026-03-10T13:37:59.024 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.021+0000 7f9121710640 1 -- 192.168.123.105:0/3057926101 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f911c1c2a20 con 0x7f911c074230 2026-03-10T13:37:59.024 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.022+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f91140036c0 con 0x7f911c074230 2026-03-10T13:37:59.024 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.022+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f9114005fd0 con 0x7f911c074230 2026-03-10T13:37:59.024 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.022+0000 7f90f9ffb640 1 -- 192.168.123.105:0/3057926101 --> v1:192.168.123.105:6790/0 -- mon_get_version(what=osdmap handle=1) -- 0x7f90e8000f80 con 0x7f911c074230 2026-03-10T13:37:59.026 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.026+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7f91140054d0 con 0x7f911c074230 2026-03-10T13:37:59.042 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.042+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f9114096270 con 0x7f911c074230 2026-03-10T13:37:59.044 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.044+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 --> v1:192.168.123.105:6801/3141950523 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f90ec086ba0 con 0x7f90ec082ed0 2026-03-10T13:37:59.044 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.044+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_get_version_reply(handle=1 version=58) ==== 24+0+0 (unknown 2701689922 0 0) 0x7f9114096640 con 0x7f911c074230 2026-03-10T13:37:59.046 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.045+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== osd.0 v1:192.168.123.105:6801/3141950523 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (unknown 0 0 3365489958) 0x7f90ec086ba0 con 0x7f90ec082ed0 2026-03-10T13:37:59.060 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.057+0000 7f90f9ffb640 1 -- 192.168.123.105:0/3057926101 --> v1:192.168.123.105:6801/3141950523 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f90e8002cf0 con 0x7f90ec082ed0 2026-03-10T13:37:59.060 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.060+0000 7f90fbfff640 1 -- 192.168.123.105:0/3057926101 <== osd.0 
v1:192.168.123.105:6801/3141950523 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (unknown 0 0 1673232951) 0x7f90e8002cf0 con 0x7f90ec082ed0 2026-03-10T13:37:59.069 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.061+0000 7f9121710640 1 -- 192.168.123.105:0/3057926101 >> v1:192.168.123.105:6801/3141950523 conn(0x7f90ec082ed0 legacy=0x7f90ec085330 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:59.069 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.068+0000 7f9121710640 1 -- 192.168.123.105:0/3057926101 >> v1:192.168.123.105:6800/3845654103 conn(0x7f90ec078830 legacy=0x7f90ec07acf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:59.069 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.068+0000 7f9121710640 1 -- 192.168.123.105:0/3057926101 >> v1:192.168.123.105:6790/0 conn(0x7f911c074230 legacy=0x7f911c10b180 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:37:59.069 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.068+0000 7f911b7fe640 1 -- 192.168.123.105:0/3057926101 reap_dead start 2026-03-10T13:37:59.072 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.069+0000 7f9121710640 1 -- 192.168.123.105:0/3057926101 shutdown_connections 2026-03-10T13:37:59.072 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.069+0000 7f9121710640 1 -- 192.168.123.105:0/3057926101 >> 192.168.123.105:0/3057926101 conn(0x7f911c06e900 msgr2=0x7f911c07d330 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:37:59.074 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.071+0000 7f9121710640 1 -- 192.168.123.105:0/3057926101 shutdown_connections 2026-03-10T13:37:59.085 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.082+0000 7f9121710640 1 -- 192.168.123.105:0/3057926101 wait complete. 
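The ceph osd last-stat-seq osd.N calls that follow pair with the per-OSD flush_pg_stats tells above: each flush returns a stat sequence number, and the harness then reads last-stat-seq until the cluster has caught up, so later pgmap-derived checks see fresh stats. A minimal sketch of that barrier, using this run's fsid and image but illustrative helper names (not teuthology's API), assuming sudo access on the host:

    import subprocess
    import time

    FSID = 'e063dc72-1c85-11f1-a098-09993c5c5b66'
    IMAGE = 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df'

    def ceph(*args):
        # Same invocation shape as the DEBUG lines: cephadm shell -- ceph ...
        out = subprocess.run(
            ['sudo', 'cephadm', '--image', IMAGE, 'shell', '--fsid', FSID,
             '--', 'ceph', *args],
            check=True, capture_output=True, text=True)
        return out.stdout.strip()

    def flush_pg_stats(osds, timeout=60):
        # `tell osd.N flush_pg_stats` returns the stat sequence the OSD will
        # publish; poll `osd last-stat-seq` until the cluster has seen it.
        seqs = {osd: int(ceph('tell', f'osd.{osd}', 'flush_pg_stats'))
                for osd in osds}
        deadline = time.time() + timeout
        for osd, seq in seqs.items():
            while int(ceph('osd', 'last-stat-seq', f'osd.{osd}')) < seq:
                if time.time() > deadline:
                    raise TimeoutError(f'osd.{osd} stats not flushed')
                time.sleep(1)

    flush_pg_stats(range(8))   # the run polls osd.0 through osd.7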
2026-03-10T13:37:59.125 INFO:teuthology.orchestra.run.vm05.stdout:55834574866
2026-03-10T13:37:59.125 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd last-stat-seq osd.1
2026-03-10T13:37:59.168 INFO:teuthology.orchestra.run.vm05.stdout:103079215117
2026-03-10T13:37:59.169 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd last-stat-seq osd.3
2026-03-10T13:37:59.187 INFO:teuthology.orchestra.run.vm05.stdout:150323855369
2026-03-10T13:37:59.188 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd last-stat-seq osd.5
2026-03-10T13:37:59.289 INFO:teuthology.orchestra.run.vm05.stdout:120259084299
2026-03-10T13:37:59.289 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd last-stat-seq osd.4
2026-03-10T13:37:59.292 INFO:teuthology.orchestra.run.vm05.stdout:197568495621
2026-03-10T13:37:59.292 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd last-stat-seq osd.7
2026-03-10T13:37:59.308 INFO:teuthology.orchestra.run.vm05.stdout:176093659143
2026-03-10T13:37:59.308 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd last-stat-seq osd.6
2026-03-10T13:37:59.318 INFO:teuthology.orchestra.run.vm05.stdout:73014444047
2026-03-10T13:37:59.318 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd last-stat-seq osd.2
2026-03-10T13:37:59.331 INFO:teuthology.orchestra.run.vm05.stdout:38654705684
2026-03-10T13:37:59.332 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd last-stat-seq osd.0
2026-03-10T13:37:59.697 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config
2026-03-10T13:37:59.788 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:59 vm05 ceph-mon[51512]: pgmap v114: 132 pgs: 121 active+clean, 11 creating+peering; 451 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 23 KiB/s rd, 2.1 KiB/s wr, 54 op/s
2026-03-10T13:37:59.788 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:59 vm05 ceph-mon[51512]: mgrmap e16: y(active, since 2m), standbys: x
2026-03-10T13:37:59.788 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:37:59 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:37:59.788 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:59 vm05 ceph-mon[58955]: pgmap v114: 132 pgs: 121 active+clean, 11 creating+peering; 451 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 23 KiB/s rd, 2.1 KiB/s wr, 54 op/s
2026-03-10T13:37:59.788 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:59 vm05 ceph-mon[58955]: mgrmap e16: y(active, since 2m), standbys: x
2026-03-10T13:37:59.788 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:37:59 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:37:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:59 vm09 ceph-mon[53367]: pgmap v114: 132 pgs: 121 active+clean, 11 creating+peering; 451 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 23 KiB/s rd, 2.1 KiB/s wr, 54 op/s
2026-03-10T13:37:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:59 vm09 ceph-mon[53367]: mgrmap e16: y(active, since 2m), standbys: x
2026-03-10T13:37:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:37:59 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y'
2026-03-10T13:37:59.983 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.982+0000 7f6016f97640 1 -- 192.168.123.105:0/2214250784 >> v1:192.168.123.105:6789/0 conn(0x7f6010075ff0 legacy=0x7f601010f8c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:37:59.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.982+0000 7f6016f97640 1 -- 192.168.123.105:0/2214250784 shutdown_connections
2026-03-10T13:37:59.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.982+0000 7f6016f97640 1 -- 192.168.123.105:0/2214250784 >> 192.168.123.105:0/2214250784 conn(0x7f60100fe1b0 msgr2=0x7f60101005d0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:37:59.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.982+0000 7f6016f97640 1 -- 192.168.123.105:0/2214250784 shutdown_connections
2026-03-10T13:37:59.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.982+0000 7f6016f97640 1 -- 192.168.123.105:0/2214250784 wait complete.
2026-03-10T13:37:59.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.984+0000 7f6016f97640 1 Processor -- start
2026-03-10T13:37:59.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.984+0000 7f6016f97640 1 -- start start
2026-03-10T13:37:59.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.984+0000 7f6016f97640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f60101ab760 con 0x7f60101111a0
2026-03-10T13:37:59.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.984+0000 7f6016f97640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f60101ac960 con 0x7f601010d0c0
2026-03-10T13:37:59.984 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.984+0000 7f6016f97640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f60101adb60 con 0x7f6010075410
2026-03-10T13:37:59.985 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.984+0000 7f6015794640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f60101111a0 0x7f60101a66f0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:60252/0 (socket says 192.168.123.105:60252)
2026-03-10T13:37:59.985 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.984+0000 7f6015794640 1 -- 192.168.123.105:0/3587733294 learned_addr learned my addr 192.168.123.105:0/3587733294 (peer_addr_for_me v1:192.168.123.105:0/0)
2026-03-10T13:37:59.985 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.985+0000 7f5ffeffd640 1 -- 192.168.123.105:0/3587733294 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1018965945 0 0) 0x7f60101ab760 con 0x7f60101111a0
2026-03-10T13:37:59.985 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.985+0000 7f5ffeffd640 1 -- 192.168.123.105:0/3587733294 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5fec003620 con 0x7f60101111a0
2026-03-10T13:37:59.987 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.985+0000 7f5ffeffd640 1 -- 192.168.123.105:0/3587733294 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1280593607 0 0) 0x7f5fec003620 con 0x7f60101111a0
2026-03-10T13:37:59.987 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.985+0000 7f5ffeffd640 1 -- 192.168.123.105:0/3587733294 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f60101ab760 con 0x7f60101111a0
2026-03-10T13:37:59.987 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.985+0000 7f5ffeffd640 1 -- 192.168.123.105:0/3587733294 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f600c0032e0 con 0x7f60101111a0
2026-03-10T13:37:59.987 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.986+0000 7f5ffeffd640 1 -- 192.168.123.105:0/3587733294 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3285965577 0 0) 0x7f60101ab760 con 0x7f60101111a0
2026-03-10T13:37:59.987 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.986+0000 7f5ffeffd640 1 -- 192.168.123.105:0/3587733294 >> v1:192.168.123.105:6790/0 conn(0x7f6010075410 legacy=0x7f601010c700 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:37:59.987 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.987+0000 7f5ffeffd640 1 -- 192.168.123.105:0/3587733294 >> v1:192.168.123.109:6789/0 conn(0x7f601010d0c0 legacy=0x7f60101a9e60 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T13:37:59.988 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.987+0000 7f5ffeffd640 1 -- 192.168.123.105:0/3587733294 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f60101aed60 con 0x7f60101111a0
2026-03-10T13:37:59.988 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.988+0000 7f6016f97640 1 -- 192.168.123.105:0/3587733294 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f60101add90 con 0x7f60101111a0
2026-03-10T13:37:59.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.988+0000 7f6016f97640 1 -- 192.168.123.105:0/3587733294 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f60101ae2f0 con 0x7f60101111a0
2026-03-10T13:37:59.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.988+0000 7f6016f97640 1 -- 192.168.123.105:0/3587733294 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6010076b50 con 0x7f60101111a0
2026-03-10T13:37:59.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.992+0000 7f5ffeffd640 1 -- 192.168.123.105:0/3587733294 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f600c0038b0 con 0x7f60101111a0
2026-03-10T13:37:59.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.992+0000 7f5ffeffd640 1 -- 192.168.123.105:0/3587733294 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f600c0050a0 con 0x7f60101111a0
2026-03-10T13:37:59.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.992+0000 7f5ffeffd640 1 -- 192.168.123.105:0/3587733294 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7f600c005320 con 0x7f60101111a0
2026-03-10T13:38:00.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.996+0000 7f5ffeffd640 1 -- 192.168.123.105:0/3587733294 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f600c003e80 con 0x7f60101111a0
2026-03-10T13:38:00.000 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:37:59.996+0000 7f5ffeffd640 1 -- 192.168.123.105:0/3587733294 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f600c0042d0 con 0x7f60101111a0
2026-03-10T13:38:00.131 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.130+0000 7f6016f97640 1 -- 192.168.123.105:0/3587733294 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 1} v 0) -- 0x7f6010116fd0 con 0x7f60101111a0
2026-03-10T13:38:00.131 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.131+0000 7f5ffeffd640 1 -- 192.168.123.105:0/3587733294 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 1}]=0 v0) ==== 74+0+12 (unknown 832126871 0 644352619) 0x7f600c05f230 con 0x7f60101111a0
2026-03-10T13:38:00.135 INFO:teuthology.orchestra.run.vm05.stdout:55834574865
2026-03-10T13:38:00.136 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config
2026-03-10T13:38:00.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.137+0000 7f6016f97640 1 -- 192.168.123.105:0/3587733294 >> v1:192.168.123.105:6800/3845654103 conn(0x7f5fec0781b0 legacy=0x7f5fec07a670 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.137 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.137+0000 7f6016f97640 1 -- 192.168.123.105:0/3587733294 >> v1:192.168.123.105:6789/0 conn(0x7f60101111a0 legacy=0x7f60101a66f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.137+0000 7f6016f97640 1 -- 192.168.123.105:0/3587733294 shutdown_connections
2026-03-10T13:38:00.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.137+0000 7f6016f97640 1 -- 192.168.123.105:0/3587733294 >> 192.168.123.105:0/3587733294 conn(0x7f60100fe1b0 msgr2=0x7f6010113660 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:38:00.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.137+0000 7f6016f97640 1 -- 192.168.123.105:0/3587733294 shutdown_connections
2026-03-10T13:38:00.138 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.138+0000 7f6016f97640 1 -- 192.168.123.105:0/3587733294 wait complete.
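[editor's note] The DEBUG lines above show the exact shape of the per-OSD polling: one `cephadm shell` per query, each running `ceph osd last-stat-seq osd.N` and printing a 64-bit sequence number on stdout. A sketch of the same loop in Python, with IMAGE and FSID copied from this run's log (the helper name is hypothetical):

    import subprocess

    IMAGE = 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df'
    FSID = 'e063dc72-1c85-11f1-a098-09993c5c5b66'

    def last_stat_seq(osd_id: int) -> int:
        # Mirrors: sudo cephadm --image IMAGE shell --fsid FSID -- \
        #          ceph osd last-stat-seq osd.N
        out = subprocess.run(
            ['sudo', 'cephadm', '--image', IMAGE, 'shell', '--fsid', FSID,
             '--', 'ceph', 'osd', 'last-stat-seq', f'osd.{osd_id}'],
            check=True, capture_output=True, text=True).stdout
        return int(out.strip())

    # e.g. snapshot all eight OSDs, as the log does in quick succession
    seqs = {i: last_stat_seq(i) for i in range(8)}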
2026-03-10T13:38:00.325 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config
2026-03-10T13:38:00.334 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config
2026-03-10T13:38:00.339 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config
2026-03-10T13:38:00.339 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.354+0000 7fa132143640 1 -- 192.168.123.105:0/3288771703 >> v1:192.168.123.105:6789/0 conn(0x7fa12c0772b0 legacy=0x7fa12c079750 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.354+0000 7fa132143640 1 -- 192.168.123.105:0/3288771703 shutdown_connections
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.354+0000 7fa132143640 1 -- 192.168.123.105:0/3288771703 >> 192.168.123.105:0/3288771703 conn(0x7fa12c06e900 msgr2=0x7fa12c06ed10 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.354+0000 7fa132143640 1 -- 192.168.123.105:0/3288771703 shutdown_connections
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.354+0000 7fa132143640 1 -- 192.168.123.105:0/3288771703 wait complete.
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.355+0000 7fa132143640 1 Processor -- start
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.355+0000 7fa132143640 1 -- start start
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.355+0000 7fa132143640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa12c13a250 con 0x7fa12c136490
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.355+0000 7fa132143640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa12c13b450 con 0x7fa12c07ae70
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.355+0000 7fa132143640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fa12c13c650 con 0x7fa12c074230
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.355+0000 7fa12b7fe640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7fa12c074230 0x7fa12c10b040 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:58934/0 (socket says 192.168.123.105:58934)
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.355+0000 7fa12b7fe640 1 -- 192.168.123.105:0/1535815681 learned_addr learned my addr 192.168.123.105:0/1535815681 (peer_addr_for_me v1:192.168.123.105:0/0)
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.356+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3118478408 0 0) 0x7fa12c13a250 con 0x7fa12c136490
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.356+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa104003620 con 0x7fa12c136490
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.361+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1003541495 0 0) 0x7fa12c13c650 con 0x7fa12c074230
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.361+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa12c13a250 con 0x7fa12c074230
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.361+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1558784891 0 0) 0x7fa12c13b450 con 0x7fa12c07ae70
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.361+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fa12c13c650 con 0x7fa12c07ae70
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.361+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2433645579 0 0) 0x7fa104003620 con 0x7fa12c136490
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.361+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fa12c13b450 con 0x7fa12c136490
2026-03-10T13:38:00.368 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.361+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fa120002ed0 con 0x7fa12c136490
2026-03-10T13:38:00.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.362+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3918702876 0 0) 0x7fa12c13b450 con 0x7fa12c136490
2026-03-10T13:38:00.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.362+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 >> v1:192.168.123.105:6790/0 conn(0x7fa12c074230 legacy=0x7fa12c10b040 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.362+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 >> v1:192.168.123.109:6789/0 conn(0x7fa12c07ae70 legacy=0x7fa12c10b770 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.362+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa12c13d850 con 0x7fa12c136490
2026-03-10T13:38:00.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.362+0000 7fa132143640 1 -- 192.168.123.105:0/1535815681 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fa12c13a480 con 0x7fa12c136490
2026-03-10T13:38:00.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.362+0000 7fa132143640 1 -- 192.168.123.105:0/1535815681 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fa12c13aab0 con 0x7fa12c136490
2026-03-10T13:38:00.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.362+0000 7fa132143640 1 -- 192.168.123.105:0/1535815681 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa12c10a470 con 0x7fa12c136490
2026-03-10T13:38:00.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.366+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fa120003930 con 0x7fa12c136490
2026-03-10T13:38:00.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.366+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fa120004800 con 0x7fa12c136490
2026-03-10T13:38:00.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.367+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7fa12001d160 con 0x7fa12c136490
2026-03-10T13:38:00.369 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.367+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7fa120094b80 con 0x7fa12c136490
2026-03-10T13:38:00.383 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.370+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fa12005daf0 con 0x7fa12c136490
2026-03-10T13:38:00.389 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574866 got 55834574865 for osd.1
2026-03-10T13:38:00.449 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config
2026-03-10T13:38:00.497 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.495+0000 7fa132143640 1 -- 192.168.123.105:0/1535815681 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 5} v 0) -- 0x7fa12c10c250 con 0x7fa12c136490
2026-03-10T13:38:00.500 INFO:teuthology.orchestra.run.vm05.stdout:150323855368
2026-03-10T13:38:00.501 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.497+0000 7fa128ff9640 1 -- 192.168.123.105:0/1535815681 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 5}]=0 v0) ==== 74+0+13 (unknown 2131975755 0 4132378969) 0x7fa12c10c250 con 0x7fa12c136490
2026-03-10T13:38:00.501 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.499+0000 7fa10a7fc640 1 -- 192.168.123.105:0/1535815681 >> v1:192.168.123.105:6800/3845654103 conn(0x7fa1040781a0 legacy=0x7fa10407a660 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.501 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.500+0000 7fa10a7fc640 1 -- 192.168.123.105:0/1535815681 >> v1:192.168.123.105:6789/0 conn(0x7fa12c136490 legacy=0x7fa12c138950 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.501 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.500+0000 7fa10a7fc640 1 -- 192.168.123.105:0/1535815681 shutdown_connections
2026-03-10T13:38:00.501 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.500+0000 7fa10a7fc640 1 -- 192.168.123.105:0/1535815681 >> 192.168.123.105:0/1535815681 conn(0x7fa12c06e900 msgr2=0x7fa12c10f2f0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:38:00.501 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.500+0000 7fa10a7fc640 1 -- 192.168.123.105:0/1535815681 shutdown_connections
2026-03-10T13:38:00.501 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.500+0000 7fa10a7fc640 1 -- 192.168.123.105:0/1535815681 wait complete.
2026-03-10T13:38:00.517 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config
2026-03-10T13:38:00.821 INFO:tasks.cephadm.ceph_manager.ceph:need seq 150323855369 got 150323855368 for osd.5
2026-03-10T13:38:00.901 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3587733294' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-10T13:38:00.901 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1535815681' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-10T13:38:00.903 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3587733294' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-10T13:38:00.903 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1535815681' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-10T13:38:00.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.900+0000 7faad03d4640 1 -- 192.168.123.105:0/3650679932 >> v1:192.168.123.109:6789/0 conn(0x7faac810fdc0 legacy=0x7faac8110270 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.902+0000 7faad03d4640 1 -- 192.168.123.105:0/3650679932 shutdown_connections
2026-03-10T13:38:00.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.902+0000 7faad03d4640 1 -- 192.168.123.105:0/3650679932 >> 192.168.123.105:0/3650679932 conn(0x7faac806d730 msgr2=0x7faac806db40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:38:00.903 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.903+0000 7faad03d4640 1 -- 192.168.123.105:0/3650679932 shutdown_connections
2026-03-10T13:38:00.904 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.903+0000 7faad03d4640 1 -- 192.168.123.105:0/3650679932 wait complete.
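[editor's note] The `tasks.cephadm.ceph_manager.ceph:need seq ... got ... for osd.N` lines above are the harness comparing the stat sequence it recorded when it asked each OSD to flush its PG stats against what the monitors have since seen: it keeps re-polling until `got >= need`. The real wait loop lives in teuthology's ceph_manager; this is only a re-sketch of the idea, reusing the hypothetical last_stat_seq helper sketched earlier:

    import time

    def wait_for_stat_seq(osd_id: int, need: int,
                          timeout: float = 90.0, interval: float = 1.0) -> None:
        """Poll until the cluster reports a stat seq >= `need` for this OSD."""
        deadline = time.monotonic() + timeout
        while True:
            got = last_stat_seq(osd_id)      # sketched earlier in this log
            if got >= need:
                return
            if time.monotonic() > deadline:
                raise TimeoutError(
                    f'osd.{osd_id}: need seq {need}, still at {got}')
            # Same phrasing as the log records above
            print(f'need seq {need} got {got} for osd.{osd_id}')
            time.sleep(interval)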
2026-03-10T13:38:00.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.903+0000 7faad03d4640 1 Processor -- start
2026-03-10T13:38:00.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.907+0000 7faad03d4640 1 -- start start
2026-03-10T13:38:00.907 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.907+0000 7faad03d4640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7faac811ded0 con 0x7faac811e790
2026-03-10T13:38:00.908 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.907+0000 7faad03d4640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7faac811e0a0 con 0x7faac8074250
2026-03-10T13:38:00.908 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.907+0000 7faad03d4640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7faac811e270 con 0x7faac810fdc0
2026-03-10T13:38:00.908 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.907+0000 7faace149640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7faac8074250 0x7faac8117390 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:42084/0 (socket says 192.168.123.105:42084)
2026-03-10T13:38:00.908 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.907+0000 7faace149640 1 -- 192.168.123.105:0/488296344 learned_addr learned my addr 192.168.123.105:0/488296344 (peer_addr_for_me v1:192.168.123.105:0/0)
2026-03-10T13:38:00.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.910+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3616969889 0 0) 0x7faac811e0a0 con 0x7faac8074250
2026-03-10T13:38:00.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.910+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7faaa8003620 con 0x7faac8074250
2026-03-10T13:38:00.910 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.910+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 530796324 0 0) 0x7faac811e270 con 0x7faac810fdc0
2026-03-10T13:38:00.911 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.910+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7faac811e0a0 con 0x7faac810fdc0
2026-03-10T13:38:00.911 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.911+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2901303107 0 0) 0x7faac811ded0 con 0x7faac811e790
2026-03-10T13:38:00.911 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.911+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7faac811e270 con 0x7faac811e790
2026-03-10T13:38:00.911 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.911+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 146630337 0 0) 0x7faaa8003620 con 0x7faac8074250
2026-03-10T13:38:00.911 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.911+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7faac811ded0 con 0x7faac8074250
2026-03-10T13:38:00.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.911+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1872383398 0 0) 0x7faac811e0a0 con 0x7faac810fdc0
2026-03-10T13:38:00.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.911+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7faaa8003620 con 0x7faac810fdc0
2026-03-10T13:38:00.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.912+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1676977002 0 0) 0x7faac811e270 con 0x7faac811e790
2026-03-10T13:38:00.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.912+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7faac811e0a0 con 0x7faac811e790
2026-03-10T13:38:00.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.912+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7faac0003560 con 0x7faac8074250
2026-03-10T13:38:00.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.912+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7faabc002fd0 con 0x7faac810fdc0
2026-03-10T13:38:00.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.912+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7faab8002ef0 con 0x7faac811e790
2026-03-10T13:38:00.915 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.915+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 517294170 0 0) 0x7faac811ded0 con 0x7faac8074250
2026-03-10T13:38:00.915 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.915+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 >> v1:192.168.123.105:6790/0 conn(0x7faac810fdc0 legacy=0x7faac811d0b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.916 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.915+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 >> v1:192.168.123.105:6789/0 conn(0x7faac811e790 legacy=0x7faac811d7c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.916 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.916+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7faac811e440 con 0x7faac8074250
2026-03-10T13:38:00.916 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.916+0000 7faad03d4640 1 -- 192.168.123.105:0/488296344 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7faac81c19d0 con 0x7faac8074250
2026-03-10T13:38:00.916 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.916+0000 7faad03d4640 1 -- 192.168.123.105:0/488296344 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7faac81c1e80 con 0x7faac8074250
2026-03-10T13:38:00.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.917+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7faac00038a0 con 0x7faac8074250
2026-03-10T13:38:00.917 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.917+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7faac0006070 con 0x7faac8074250
2026-03-10T13:38:00.918 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.918+0000 7faad03d4640 1 -- 192.168.123.105:0/488296344 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7faa98005180 con 0x7faac8074250
2026-03-10T13:38:00.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.918+0000 7f18eee46640 1 -- 192.168.123.105:0/1419631558 >> v1:192.168.123.109:6789/0 conn(0x7f18e80772b0 legacy=0x7f18e8079750 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.919+0000 7f18eee46640 1 -- 192.168.123.105:0/1419631558 shutdown_connections
2026-03-10T13:38:00.919 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.919+0000 7f18eee46640 1 -- 192.168.123.105:0/1419631558 >> 192.168.123.105:0/1419631558 conn(0x7f18e806e900 msgr2=0x7f18e806ed10 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:38:00.920 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.920+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7faac0003d40 con 0x7faac8074250
2026-03-10T13:38:00.921 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.921+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7faac00951c0 con 0x7faac8074250
2026-03-10T13:38:00.922 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.921+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7faac005e130 con 0x7faac8074250
2026-03-10T13:38:00.922 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.919+0000 7f18eee46640 1 -- 192.168.123.105:0/1419631558 shutdown_connections
2026-03-10T13:38:00.922 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.919+0000 7f18eee46640 1 -- 192.168.123.105:0/1419631558 wait complete.
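[editor's note] The bursts above show monclient "hunting": each fresh client sends auth(proto 0 ...) to all three monitors in parallel, keeps whichever connection completes authentication first, and mark_downs the rest. A toy analogy of that first-responder-wins pattern (authenticate is a stand-in, not a real client API; thread cancellation is best-effort, unlike the real messenger):

    from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

    MONS = ['v1:192.168.123.105:6789/0',
            'v1:192.168.123.109:6789/0',
            'v1:192.168.123.105:6790/0']

    def authenticate(mon: str) -> str:
        ...  # banner exchange + cephx handshake would happen here
        return mon

    with ThreadPoolExecutor(len(MONS)) as pool:
        futures = {pool.submit(authenticate, m): m for m in MONS}
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        winner = next(iter(done)).result()
        for f in pending:    # the mark_down lines in the log
            f.cancel()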
2026-03-10T13:38:00.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.919+0000 7f18eee46640 1 Processor -- start
2026-03-10T13:38:00.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.919+0000 7f18eee46640 1 -- start start
2026-03-10T13:38:00.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.919+0000 7f18eee46640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f18e81c1080 con 0x7f18e807ae70
2026-03-10T13:38:00.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.919+0000 7f18eee46640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f18e81c2260 con 0x7f18e8074230
2026-03-10T13:38:00.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.919+0000 7f18eee46640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f18e81c3440 con 0x7f18e807a2b0
2026-03-10T13:38:00.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.921+0000 7f18ecbbb640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f18e8074230 0x7f18e8084530 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:42104/0 (socket says 192.168.123.105:42104)
2026-03-10T13:38:00.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.921+0000 7f18ecbbb640 1 -- 192.168.123.105:0/219451905 learned_addr learned my addr 192.168.123.105:0/219451905 (peer_addr_for_me v1:192.168.123.105:0/0)
2026-03-10T13:38:00.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.922+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3529823753 0 0) 0x7f18e81c1080 con 0x7f18e807ae70
2026-03-10T13:38:00.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.922+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f18cc003260 con 0x7f18e807ae70
2026-03-10T13:38:00.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.922+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4100503113 0 0) 0x7f18e81c3440 con 0x7f18e807a2b0
2026-03-10T13:38:00.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.922+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f18e81c1080 con 0x7f18e807a2b0
2026-03-10T13:38:00.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.922+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1135397031 0 0) 0x7f18e81c2260 con 0x7f18e8074230
2026-03-10T13:38:00.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.923+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f18e81c3440 con 0x7f18e8074230
2026-03-10T13:38:00.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.923+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3523637222 0 0) 0x7f18cc003260 con 0x7f18e807ae70
2026-03-10T13:38:00.923 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.923+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f18e81c2260 con 0x7f18e807ae70
2026-03-10T13:38:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3587733294' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-10T13:38:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1535815681' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-10T13:38:00.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.923+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2956184521 0 0) 0x7f18e81c1080 con 0x7f18e807a2b0
2026-03-10T13:38:00.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.923+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f18cc003260 con 0x7f18e807a2b0
2026-03-10T13:38:00.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.924+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2970011624 0 0) 0x7f18e81c3440 con 0x7f18e8074230
2026-03-10T13:38:00.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.924+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f18e81c1080 con 0x7f18e8074230
2026-03-10T13:38:00.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.924+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f18d8003110 con 0x7f18e807ae70
2026-03-10T13:38:00.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.924+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f18dc002f30 con 0x7f18e807a2b0
2026-03-10T13:38:00.924 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.924+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f18e00034a0 con 0x7f18e8074230
2026-03-10T13:38:00.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.924+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3815344447 0 0) 0x7f18e81c2260 con 0x7f18e807ae70
2026-03-10T13:38:00.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.924+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 >> v1:192.168.123.105:6790/0 conn(0x7f18e807a2b0 legacy=0x7f18e807a760 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.925+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 >> v1:192.168.123.109:6789/0 conn(0x7f18e8074230 legacy=0x7f18e8084530 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.925 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.925+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f18e81c4620 con 0x7f18e807ae70
2026-03-10T13:38:00.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.925+0000 7f18eee46640 1 -- 192.168.123.105:0/219451905 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f18e81c2490 con 0x7f18e807ae70
2026-03-10T13:38:00.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.925+0000 7f18eee46640 1 -- 192.168.123.105:0/219451905 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f18e81c2950 con 0x7f18e807ae70
2026-03-10T13:38:00.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.926+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f18d8003b10 con 0x7f18e807ae70
2026-03-10T13:38:00.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.926+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f18d8004990 con 0x7f18e807ae70
2026-03-10T13:38:00.927 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.927+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7f18d8005c40 con 0x7f18e807ae70
2026-03-10T13:38:00.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.930+0000 7fdd9b298640 1 -- 192.168.123.105:0/3313824053 >> v1:192.168.123.109:6789/0 conn(0x7fdd8c0074f0 legacy=0x7fdd8c009970 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.930+0000 7fdd9b298640 1 -- 192.168.123.105:0/3313824053 shutdown_connections
2026-03-10T13:38:00.931 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.930+0000 7fdd9b298640 1 -- 192.168.123.105:0/3313824053 >> 192.168.123.105:0/3313824053 conn(0x7fdd8c01a440 msgr2=0x7fdd8c01a850 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:38:00.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.930+0000 7fdd9b298640 1 -- 192.168.123.105:0/3313824053 shutdown_connections
2026-03-10T13:38:00.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.930+0000 7fdd9b298640 1 -- 192.168.123.105:0/3313824053 wait complete.
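[editor's note] Each admin command also shows up once per monitor in the journal as an audit record (from='client.?' ... cmd=[...]: dispatch), which is why the same "osd last-stat-seq" entries repeat for mon.a, mon.b, and mon.c above. A small sketch for pulling the command out of such lines when post-processing a run:

    import re

    AUDIT = re.compile(
        r"from='(?P<who>[^']*)' entity='(?P<entity>[^']*)' "
        r"cmd=(?P<cmd>\[.*\]): dispatch")

    line = ("Mar 10 13:38:00 vm09 ceph-mon[53367]: from='client.? "
            "v1:192.168.123.105:0/3587733294' entity='client.admin' "
            "cmd=[{\"prefix\": \"osd last-stat-seq\", \"id\": 1}]: dispatch")

    m = AUDIT.search(line)
    if m:
        print(m.group('entity'), m.group('cmd'))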
2026-03-10T13:38:00.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.931+0000 7fdd9b298640 1 Processor -- start
2026-03-10T13:38:00.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.931+0000 7fdd9b298640 1 -- start start
2026-03-10T13:38:00.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.931+0000 7fdd9b298640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fdd8c016310 con 0x7fdd8c00b180
2026-03-10T13:38:00.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.931+0000 7fdd9b298640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fdd8c0164e0 con 0x7fdd8c015e60
2026-03-10T13:38:00.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.931+0000 7fdd9b298640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fdd8c0d84f0 con 0x7fdd8c0aa780
2026-03-10T13:38:00.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.931+0000 7fdd99a95640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7fdd8c0aa780 0x7fdd8c015750 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:58986/0 (socket says 192.168.123.105:58986)
2026-03-10T13:38:00.932 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.931+0000 7fdd99a95640 1 -- 192.168.123.105:0/1998525663 learned_addr learned my addr 192.168.123.105:0/1998525663 (peer_addr_for_me v1:192.168.123.105:0/0)
2026-03-10T13:38:00.933 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.932+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 691560255 0 0) 0x7fdd8c0d84f0 con 0x7fdd8c0aa780
2026-03-10T13:38:00.933 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.932+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fdd6c003620 con 0x7fdd8c0aa780
2026-03-10T13:38:00.933 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.933+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3370658520 0 0) 0x7fdd8c0164e0 con 0x7fdd8c015e60
2026-03-10T13:38:00.933 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.933+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fdd8c0d84f0 con 0x7fdd8c015e60
2026-03-10T13:38:00.933 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.933+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 756375939 0 0) 0x7fdd8c016310 con 0x7fdd8c00b180
2026-03-10T13:38:00.933 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.933+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fdd8c0164e0 con 0x7fdd8c00b180
2026-03-10T13:38:00.933 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.933+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2022497695 0 0) 0x7fdd6c003620 con 0x7fdd8c0aa780
2026-03-10T13:38:00.933 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.933+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fdd8c016310 con 0x7fdd8c0aa780
2026-03-10T13:38:00.933 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.933+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3390811500 0 0) 0x7fdd8c0d84f0 con 0x7fdd8c015e60
2026-03-10T13:38:00.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.933+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fdd6c003620 con 0x7fdd8c015e60
2026-03-10T13:38:00.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.933+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3683982915 0 0) 0x7fdd8c0164e0 con 0x7fdd8c00b180
2026-03-10T13:38:00.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.933+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fdd8c0d84f0 con 0x7fdd8c00b180
2026-03-10T13:38:00.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.933+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fdd94050f30 con 0x7fdd8c0aa780
2026-03-10T13:38:00.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.933+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fdd84002ef0 con 0x7fdd8c015e60
2026-03-10T13:38:00.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.933+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fdd90003120 con 0x7fdd8c00b180
2026-03-10T13:38:00.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.934+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 4275516463 0 0) 0x7fdd8c016310 con 0x7fdd8c0aa780
2026-03-10T13:38:00.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.934+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 >> v1:192.168.123.109:6789/0 conn(0x7fdd8c015e60 legacy=0x7fdd8c0d4db0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.934+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 >> v1:192.168.123.105:6789/0 conn(0x7fdd8c00b180 legacy=0x7fdd8c0a4e60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.934 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.934+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fdd8c0d96d0 con 0x7fdd8c0aa780
2026-03-10T13:38:00.936 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.934+0000 7fdd9b298640 1 -- 192.168.123.105:0/1998525663 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fdd8c0d86c0 con 0x7fdd8c0aa780
2026-03-10T13:38:00.936 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.934+0000 7fdd9b298640 1 -- 192.168.123.105:0/1998525663 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7fdd8c0d8c00 con 0x7fdd8c0aa780
2026-03-10T13:38:00.936 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.936+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fdd94063510 con 0x7fdd8c0aa780
2026-03-10T13:38:00.936 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.936+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fdd9406a810 con 0x7fdd8c0aa780
2026-03-10T13:38:00.937 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.937+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7fdd94083090 con 0x7fdd8c0aa780
2026-03-10T13:38:00.938 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.938+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f18d80948e0 con 0x7f18e807ae70
2026-03-10T13:38:00.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.938+0000 7f18eee46640 1 -- 192.168.123.105:0/219451905 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f18b0005180 con 0x7f18e807ae70
2026-03-10T13:38:00.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.939+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7fdd940fa730 con 0x7fdd8c0aa780
2026-03-10T13:38:00.942 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.939+0000 7fdd9b298640 1 -- 192.168.123.105:0/1998525663 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fdd64005180 con 0x7fdd8c0aa780
2026-03-10T13:38:00.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.942+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f18d805d850 con 0x7f18e807ae70
2026-03-10T13:38:00.943 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.943+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fdd940c36a0 con 0x7fdd8c0aa780
2026-03-10T13:38:00.963 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.962+0000 7f093733c640 1 -- 192.168.123.105:0/2194203963 >> v1:192.168.123.105:6790/0 conn(0x7f09280aa780 legacy=0x7f09280aab60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.963+0000 7f093733c640 1 -- 192.168.123.105:0/2194203963 shutdown_connections
2026-03-10T13:38:00.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.963+0000 7f093733c640 1 -- 192.168.123.105:0/2194203963 >> 192.168.123.105:0/2194203963 conn(0x7f092801a440 msgr2=0x7f092801a850 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:38:00.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.963+0000 7f093733c640 1 -- 192.168.123.105:0/2194203963 shutdown_connections
2026-03-10T13:38:00.964 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.963+0000 7f093733c640 1 -- 192.168.123.105:0/2194203963 wait complete.
2026-03-10T13:38:00.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.965+0000 7f5db6ab8640 1 -- 192.168.123.105:0/3937564959 >> v1:192.168.123.105:6790/0 conn(0x7f5db00772b0 legacy=0x7f5db0079750 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:00.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.965+0000 7f5db6ab8640 1 -- 192.168.123.105:0/3937564959 shutdown_connections
2026-03-10T13:38:00.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.964+0000 7f093733c640 1 Processor -- start
2026-03-10T13:38:00.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.964+0000 7f093733c640 1 -- start start
2026-03-10T13:38:00.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.964+0000 7f093733c640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f09280d5eb0 con 0x7f09280aa780
2026-03-10T13:38:00.966 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.965+0000 7f5db6ab8640 1 -- 192.168.123.105:0/3937564959 >> 192.168.123.105:0/3937564959 conn(0x7f5db006e900 msgr2=0x7f5db006ed10 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:38:00.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.964+0000 7f093733c640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f09280d70b0 con 0x7f092800b180
2026-03-10T13:38:00.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.964+0000 7f093733c640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f09280d82b0 con 0x7f09280074f0
2026-03-10T13:38:00.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.964+0000 7f093633a640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f09280074f0 0x7f09280a51a0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:59008/0 (socket says 192.168.123.105:59008)
2026-03-10T13:38:00.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.964+0000 7f093633a640 1 -- 192.168.123.105:0/4108185877 learned_addr learned my addr 192.168.123.105:0/4108185877 (peer_addr_for_me v1:192.168.123.105:0/0)
2026-03-10T13:38:00.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.967+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2784241830 0 0) 0x7f09280d82b0 con 0x7f09280074f0
2026-03-10T13:38:00.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.967+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f0900003620 con 0x7f09280074f0
2026-03-10T13:38:00.967 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.967+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4120465069 0 0) 0x7f09280d70b0 con 0x7f092800b180
2026-03-10T13:38:00.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.967+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f09280d82b0 con 0x7f092800b180
2026-03-10T13:38:00.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.967+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 291805381 0 0) 0x7f09280d5eb0 con 0x7f09280aa780
2026-03-10T13:38:00.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.967+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f09280d70b0 con 0x7f09280aa780
2026-03-10T13:38:00.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.968+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1308976333 0 0) 0x7f0900003620 con 0x7f09280074f0
2026-03-10T13:38:00.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.968+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f09280d5eb0 con 0x7f09280074f0
2026-03-10T13:38:00.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.968+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 92587648 0 0) 0x7f09280d70b0 con 0x7f09280aa780
2026-03-10T13:38:00.968 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.968+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f0900003620 con 0x7f09280aa780
2026-03-10T13:38:00.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.968+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f0930051260 con 0x7f09280074f0
2026-03-10T13:38:00.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.969+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 438804544 0 0) 0x7f09280d82b0 con 0x7f092800b180
2026-03-10T13:38:00.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.966+0000 7f5db6ab8640 1 -- 192.168.123.105:0/3937564959 shutdown_connections
2026-03-10T13:38:00.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.966+0000 7f5db6ab8640 1 -- 192.168.123.105:0/3937564959 wait complete.
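[editor's note] Before it can parse a command like "osd last-stat-seq", each new CLI process above asks a monitor for the full command table; that is the get_command_descriptions request and its ~195 KB mon_command_ack payload (72+0+195034) recurring in this log. Roughly the same request via python-rados, assuming a local ceph.conf:

    import json
    import rados

    # Rados supports the context-manager protocol: connect on enter,
    # shutdown (the messenger teardown seen in this log) on exit.
    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({'prefix': 'get_command_descriptions'}), b'')
        if ret == 0:
            print(len(json.loads(outbuf)), 'command descriptions')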
2026-03-10T13:38:00.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.966+0000 7f5db6ab8640 1 Processor -- start 2026-03-10T13:38:00.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.966+0000 7f5db6ab8640 1 -- start start 2026-03-10T13:38:00.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.966+0000 7f5db6ab8640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f5db013a280 con 0x7f5db0074230 2026-03-10T13:38:00.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.966+0000 7f5db6ab8640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f5db013b480 con 0x7f5db007ae70 2026-03-10T13:38:00.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.969+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f09280d70b0 con 0x7f092800b180 2026-03-10T13:38:00.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.966+0000 7f5db6ab8640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f5db013c680 con 0x7f5db01364c0 2026-03-10T13:38:00.969 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.968+0000 7f5db502e640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f5db01364c0 0x7f5db0138980 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:59012/0 (socket says 192.168.123.105:59012) 2026-03-10T13:38:00.970 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.968+0000 7f5db502e640 1 -- 192.168.123.105:0/2667715510 learned_addr learned my addr 192.168.123.105:0/2667715510 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:38:00.970 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.969+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 81884581 0 0) 0x7f5db013a280 con 0x7f5db0074230 2026-03-10T13:38:00.970 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.970+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f092c002fb0 con 0x7f09280aa780 2026-03-10T13:38:00.970 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.970+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1849468932 0 0) 0x7f09280d5eb0 con 0x7f09280074f0 2026-03-10T13:38:00.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.970+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 >> v1:192.168.123.109:6789/0 conn(0x7f092800b180 legacy=0x7f0928015750 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:00.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.971+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5d8c003620 con 0x7f5db0074230 2026-03-10T13:38:00.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.971+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 >> v1:192.168.123.105:6789/0 conn(0x7f09280aa780 legacy=0x7f0928015f30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:00.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.971+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.1 
v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1400273899 0 0) 0x7f5db013b480 con 0x7f5db007ae70 2026-03-10T13:38:00.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.971+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f09280d94b0 con 0x7f09280074f0 2026-03-10T13:38:00.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.971+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5db013a280 con 0x7f5db007ae70 2026-03-10T13:38:00.971 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.971+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 931012000 0 0) 0x7f5db013c680 con 0x7f5db01364c0 2026-03-10T13:38:00.972 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.971+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5db013b480 con 0x7f5db01364c0 2026-03-10T13:38:00.972 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.972+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 311350537 0 0) 0x7f5d8c003620 con 0x7f5db0074230 2026-03-10T13:38:00.972 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.972+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f5db013c680 con 0x7f5db0074230 2026-03-10T13:38:00.973 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.972+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2494934675 0 0) 0x7f5db013a280 con 0x7f5db007ae70 2026-03-10T13:38:00.973 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.972+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f5d8c003620 con 0x7f5db007ae70 2026-03-10T13:38:00.973 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.973+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2219973173 0 0) 0x7f5db013b480 con 0x7f5db01364c0 2026-03-10T13:38:00.973 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.973+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f5db013a280 con 0x7f5db01364c0 2026-03-10T13:38:00.973 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.973+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f5da0003120 con 0x7f5db0074230 2026-03-10T13:38:00.973 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.973+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f5da4002fd0 con 0x7f5db007ae70 2026-03-10T13:38:00.973 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.973+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 
0x7f5da8003410 con 0x7f5db01364c0 2026-03-10T13:38:00.973 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.973+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3220816531 0 0) 0x7f5db013c680 con 0x7f5db0074230 2026-03-10T13:38:00.973 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.973+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 >> v1:192.168.123.105:6790/0 conn(0x7f5db01364c0 legacy=0x7f5db0138980 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:00.974 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.973+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 >> v1:192.168.123.109:6789/0 conn(0x7f5db007ae70 legacy=0x7f5db010b700 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:00.974 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.974+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5db013d880 con 0x7f5db0074230 2026-03-10T13:38:00.979 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.971+0000 7f093733c640 1 -- 192.168.123.105:0/4108185877 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f09280d72e0 con 0x7f09280074f0 2026-03-10T13:38:00.981 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.979+0000 7f5db6ab8640 1 -- 192.168.123.105:0/2667715510 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f5db013b6b0 con 0x7f5db0074230 2026-03-10T13:38:00.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.972+0000 7f093733c640 1 -- 192.168.123.105:0/4108185877 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f09280d7820 con 0x7f09280074f0 2026-03-10T13:38:00.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.981+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f0930050f30 con 0x7f09280074f0 2026-03-10T13:38:00.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.981+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f0930069900 con 0x7f09280074f0 2026-03-10T13:38:00.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.981+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7f0930069b60 con 0x7f09280074f0 2026-03-10T13:38:00.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.982+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f09300f9450 con 0x7f09280074f0 2026-03-10T13:38:00.982 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.979+0000 7f5db6ab8640 1 -- 192.168.123.105:0/2667715510 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f5db013bbc0 con 0x7f5db0074230 2026-03-10T13:38:00.983 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.982+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f5da0003a80 con 0x7f5db0074230 2026-03-10T13:38:00.983 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.982+0000 
7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f5da0004b90 con 0x7f5db0074230 2026-03-10T13:38:00.983 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.983+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7f5da0004e10 con 0x7f5db0074230 2026-03-10T13:38:00.986 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.986+0000 7f09217fa640 1 -- 192.168.123.105:0/4108185877 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f08f8005180 con 0x7f09280074f0 2026-03-10T13:38:00.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.987+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f5da0094db0 con 0x7f5db0074230 2026-03-10T13:38:00.989 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.987+0000 7f5d7f7fe640 1 -- 192.168.123.105:0/2667715510 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5d78005180 con 0x7f5db0074230 2026-03-10T13:38:00.990 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.990+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f09300c3b70 con 0x7f09280074f0 2026-03-10T13:38:00.993 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:00.993+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f5da005dd20 con 0x7f5db0074230 2026-03-10T13:38:01.022 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.021+0000 7f63bd78f640 1 -- 192.168.123.105:0/2463810731 >> v1:192.168.123.105:6789/0 conn(0x7f63b811a770 legacy=0x7f63b811cb60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.024 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.024+0000 7f63bd78f640 1 -- 192.168.123.105:0/2463810731 shutdown_connections 2026-03-10T13:38:01.024 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.024+0000 7f63bd78f640 1 -- 192.168.123.105:0/2463810731 >> 192.168.123.105:0/2463810731 conn(0x7f63b806e900 msgr2=0x7f63b806ed10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:38:01.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.027+0000 7f63bd78f640 1 -- 192.168.123.105:0/2463810731 shutdown_connections 2026-03-10T13:38:01.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.027+0000 7f63bd78f640 1 -- 192.168.123.105:0/2463810731 wait complete. 
2026-03-10T13:38:01.027 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.027+0000 7f63bd78f640 1 Processor -- start 2026-03-10T13:38:01.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.027+0000 7f63bd78f640 1 -- start start 2026-03-10T13:38:01.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.027+0000 7f63bd78f640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f63b81c1010 con 0x7f63b811a770 2026-03-10T13:38:01.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.027+0000 7f63bd78f640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f63b81c2210 con 0x7f63b811e280 2026-03-10T13:38:01.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.027+0000 7f63bd78f640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f63b81c3410 con 0x7f63b8074230 2026-03-10T13:38:01.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.027+0000 7f63b6ffd640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f63b8074230 0x7f63b810e250 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:59030/0 (socket says 192.168.123.105:59030) 2026-03-10T13:38:01.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.027+0000 7f63b6ffd640 1 -- 192.168.123.105:0/3535532968 learned_addr learned my addr 192.168.123.105:0/3535532968 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:38:01.028 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.028+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1931363846 0 0) 0x7f63b81c3410 con 0x7f63b8074230 2026-03-10T13:38:01.029 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.028+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6388003620 con 0x7f63b8074230 2026-03-10T13:38:01.029 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.028+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1308639960 0 0) 0x7f63b81c1010 con 0x7f63b811a770 2026-03-10T13:38:01.029 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.028+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f63b81c3410 con 0x7f63b811a770 2026-03-10T13:38:01.029 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.029+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2700211856 0 0) 0x7f63b81c2210 con 0x7f63b811e280 2026-03-10T13:38:01.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.031+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f63b81c1010 con 0x7f63b811e280 2026-03-10T13:38:01.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.031+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1426505127 0 0) 0x7f6388003620 con 0x7f63b8074230 2026-03-10T13:38:01.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.031+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 --> 
v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f63b81c2210 con 0x7f63b8074230 2026-03-10T13:38:01.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.031+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 741163390 0 0) 0x7f63b81c3410 con 0x7f63b811a770 2026-03-10T13:38:01.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.031+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6388003620 con 0x7f63b811a770 2026-03-10T13:38:01.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.031+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f63a80030d0 con 0x7f63b8074230 2026-03-10T13:38:01.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.031+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f63a4003180 con 0x7f63b811a770 2026-03-10T13:38:01.031 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.031+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3955739295 0 0) 0x7f63b81c1010 con 0x7f63b811e280 2026-03-10T13:38:01.032 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.031+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f63b81c3410 con 0x7f63b811e280 2026-03-10T13:38:01.032 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.031+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 4092002892 0 0) 0x7f63b81c2210 con 0x7f63b8074230 2026-03-10T13:38:01.032 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.031+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 >> v1:192.168.123.109:6789/0 conn(0x7f63b811e280 legacy=0x7f63b81bf7b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.032 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.031+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 >> v1:192.168.123.105:6789/0 conn(0x7f63b811a770 legacy=0x7f63b8118db0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.032 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.031+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f63b81c4610 con 0x7f63b8074230 2026-03-10T13:38:01.036 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.032+0000 7f63bd78f640 1 -- 192.168.123.105:0/3535532968 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f63b81c1240 con 0x7f63b8074230 2026-03-10T13:38:01.036 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.032+0000 7f63bd78f640 1 -- 192.168.123.105:0/3535532968 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f63b81c17d0 con 0x7f63b8074230 2026-03-10T13:38:01.036 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.032+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f63a8003b60 con 0x7f63b8074230 
2026-03-10T13:38:01.036 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.032+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f63a8005bc0 con 0x7f63b8074230 2026-03-10T13:38:01.036 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.032+0000 7f63bd78f640 1 -- 192.168.123.105:0/3535532968 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6384005180 con 0x7f63b8074230 2026-03-10T13:38:01.036 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.033+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7f63a8006e70 con 0x7f63b8074230 2026-03-10T13:38:01.037 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.037+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f63a8095b70 con 0x7f63b8074230 2026-03-10T13:38:01.037 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.037+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f63a80c59f0 con 0x7f63b8074230 2026-03-10T13:38:01.173 INFO:teuthology.orchestra.run.vm05.stdout:176093659143 2026-03-10T13:38:01.173 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.165+0000 7f18eee46640 1 -- 192.168.123.105:0/219451905 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 3} v 0) -- 0x7f18b0005470 con 0x7f18e807ae70 2026-03-10T13:38:01.173 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.166+0000 7fdd9b298640 1 -- 192.168.123.105:0/1998525663 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 6} v 0) -- 0x7fdd64005470 con 0x7fdd8c0aa780 2026-03-10T13:38:01.173 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.167+0000 7fdd8b7fe640 1 -- 192.168.123.105:0/1998525663 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 6}]=0 v0) ==== 74+0+13 (unknown 1274345170 0 1466224786) 0x7fdd940c7350 con 0x7fdd8c0aa780 2026-03-10T13:38:01.173 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.171+0000 7fdd897fa640 1 -- 192.168.123.105:0/1998525663 >> v1:192.168.123.105:6800/3845654103 conn(0x7fdd6c078940 legacy=0x7fdd6c07ae00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.173 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.171+0000 7fdd897fa640 1 -- 192.168.123.105:0/1998525663 >> v1:192.168.123.105:6790/0 conn(0x7fdd8c0aa780 legacy=0x7fdd8c015750 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.173 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.172+0000 7fdd897fa640 1 -- 192.168.123.105:0/1998525663 shutdown_connections 2026-03-10T13:38:01.173 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.172+0000 7fdd897fa640 1 -- 192.168.123.105:0/1998525663 >> 192.168.123.105:0/1998525663 conn(0x7fdd8c01a440 msgr2=0x7fdd8c00b870 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:38:01.173 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.172+0000 7fdd897fa640 1 -- 192.168.123.105:0/1998525663 shutdown_connections 2026-03-10T13:38:01.173 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.172+0000 7fdd897fa640 1 -- 192.168.123.105:0/1998525663 wait complete. 2026-03-10T13:38:01.175 INFO:teuthology.orchestra.run.vm05.stdout:103079215118 2026-03-10T13:38:01.176 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.174+0000 7f18e5ffb640 1 -- 192.168.123.105:0/219451905 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 3}]=0 v0) ==== 74+0+13 (unknown 383520633 0 4218042294) 0x7f18d8061500 con 0x7f18e807ae70 2026-03-10T13:38:01.177 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.177+0000 7f18bf7fe640 1 -- 192.168.123.105:0/219451905 >> v1:192.168.123.105:6800/3845654103 conn(0x7f18cc07cfb0 legacy=0x7f18cc07f470 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.178 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.177+0000 7f18bf7fe640 1 -- 192.168.123.105:0/219451905 >> v1:192.168.123.105:6789/0 conn(0x7f18e807ae70 legacy=0x7f18e8084c40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.178 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.177+0000 7f18bf7fe640 1 -- 192.168.123.105:0/219451905 shutdown_connections 2026-03-10T13:38:01.178 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.177+0000 7f18bf7fe640 1 -- 192.168.123.105:0/219451905 >> 192.168.123.105:0/219451905 conn(0x7f18e806e900 msgr2=0x7f18e807e300 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:38:01.178 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.178+0000 7f18bf7fe640 1 -- 192.168.123.105:0/219451905 shutdown_connections 2026-03-10T13:38:01.178 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.178+0000 7f18bf7fe640 1 -- 192.168.123.105:0/219451905 wait complete. 
2026-03-10T13:38:01.290 INFO:teuthology.orchestra.run.vm05.stdout:197568495621 2026-03-10T13:38:01.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.288+0000 7f63bd78f640 1 -- 192.168.123.105:0/3535532968 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 7} v 0) -- 0x7f6384005470 con 0x7f63b8074230 2026-03-10T13:38:01.290 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.289+0000 7f63b4ff9640 1 -- 192.168.123.105:0/3535532968 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 7}]=0 v0) ==== 74+0+13 (unknown 1482059429 0 791086196) 0x7f63a805eae0 con 0x7f63b8074230 2026-03-10T13:38:01.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.292+0000 7f63adffb640 1 -- 192.168.123.105:0/3535532968 >> v1:192.168.123.105:6800/3845654103 conn(0x7f63880786b0 legacy=0x7f638807ab70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.292+0000 7f63adffb640 1 -- 192.168.123.105:0/3535532968 >> v1:192.168.123.105:6790/0 conn(0x7f63b8074230 legacy=0x7f63b810e250 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.292+0000 7f63adffb640 1 -- 192.168.123.105:0/3535532968 shutdown_connections 2026-03-10T13:38:01.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.292+0000 7f63adffb640 1 -- 192.168.123.105:0/3535532968 >> 192.168.123.105:0/3535532968 conn(0x7f63b806e900 msgr2=0x7f63b8072bd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:38:01.292 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.292+0000 7f63adffb640 1 -- 192.168.123.105:0/3535532968 shutdown_connections 2026-03-10T13:38:01.293 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.292+0000 7f63adffb640 1 -- 192.168.123.105:0/3535532968 wait complete. 
2026-03-10T13:38:01.321 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.321+0000 7f5d7f7fe640 1 -- 192.168.123.105:0/2667715510 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 0} v 0) -- 0x7f5d78005470 con 0x7f5db0074230 2026-03-10T13:38:01.322 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.321+0000 7f5dadffb640 1 -- 192.168.123.105:0/2667715510 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 0}]=0 v0) ==== 74+0+12 (unknown 574334944 0 2410200800) 0x7f5da00619d0 con 0x7f5db0074230 2026-03-10T13:38:01.322 INFO:teuthology.orchestra.run.vm05.stdout:38654705684 2026-03-10T13:38:01.335 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.334+0000 7f5db6ab8640 1 -- 192.168.123.105:0/2667715510 >> v1:192.168.123.105:6800/3845654103 conn(0x7f5d8c078a50 legacy=0x7f5d8c07af10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.339+0000 7f5db6ab8640 1 -- 192.168.123.105:0/2667715510 >> v1:192.168.123.105:6789/0 conn(0x7f5db0074230 legacy=0x7f5db010aff0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.339+0000 7f5db6ab8640 1 -- 192.168.123.105:0/2667715510 shutdown_connections 2026-03-10T13:38:01.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.340+0000 7f5db6ab8640 1 -- 192.168.123.105:0/2667715510 >> 192.168.123.105:0/2667715510 conn(0x7f5db006e900 msgr2=0x7f5db007e610 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:38:01.340 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.340+0000 7f5db6ab8640 1 -- 192.168.123.105:0/2667715510 shutdown_connections 2026-03-10T13:38:01.341 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.340+0000 7f5db6ab8640 1 -- 192.168.123.105:0/2667715510 wait complete. 
2026-03-10T13:38:01.376 INFO:tasks.cephadm.ceph_manager.ceph:need seq 103079215117 got 103079215118 for osd.3 2026-03-10T13:38:01.377 DEBUG:teuthology.parallel:result is None 2026-03-10T13:38:01.390 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd last-stat-seq osd.1 2026-03-10T13:38:01.391 INFO:tasks.cephadm.ceph_manager.ceph:need seq 176093659143 got 176093659143 for osd.6 2026-03-10T13:38:01.391 DEBUG:teuthology.parallel:result is None 2026-03-10T13:38:01.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.398+0000 7f09217fa640 1 -- 192.168.123.105:0/4108185877 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 2} v 0) -- 0x7f08f8005470 con 0x7f09280074f0 2026-03-10T13:38:01.402 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.401+0000 7f09237fe640 1 -- 192.168.123.105:0/4108185877 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 2}]=0 v0) ==== 74+0+12 (unknown 92182286 0 3078149147) 0x7f09300c7820 con 0x7f09280074f0 2026-03-10T13:38:01.402 INFO:teuthology.orchestra.run.vm05.stdout:73014444048 2026-03-10T13:38:01.411 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.408+0000 7f09217fa640 1 -- 192.168.123.105:0/4108185877 >> v1:192.168.123.105:6800/3845654103 conn(0x7f0900078970 legacy=0x7f090007ae30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.411 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.408+0000 7f09217fa640 1 -- 192.168.123.105:0/4108185877 >> v1:192.168.123.105:6790/0 conn(0x7f09280074f0 legacy=0x7f09280a51a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.411 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.409+0000 7f09217fa640 1 -- 192.168.123.105:0/4108185877 shutdown_connections 2026-03-10T13:38:01.411 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.409+0000 7f09217fa640 1 -- 192.168.123.105:0/4108185877 >> 192.168.123.105:0/4108185877 conn(0x7f092801a440 msgr2=0x7f0928006580 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:38:01.411 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.409+0000 7f09217fa640 1 -- 192.168.123.105:0/4108185877 shutdown_connections 2026-03-10T13:38:01.411 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.409+0000 7f09217fa640 1 -- 192.168.123.105:0/4108185877 wait complete. 
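[editor's note] The "need seq ... got ..." lines above come from the stat-flush verification step: the harness asks each OSD's monitor for its last-stat-seq (via `sudo cephadm --image ... shell --fsid ... -- ceph osd last-stat-seq osd.N`, as shown in the DEBUG lines) and compares it against the sequence it expects. The sketch below reproduces that polling pattern only for illustration; it is not teuthology's ceph_manager code, and the helper names (run_cephadm_ceph, wait_for_stat_seq) and retry parameters are hypothetical, while the cephadm command line and fsid/image are the ones visible in this log.

    # Illustrative sketch (assumed helpers, not teuthology's implementation).
    import subprocess
    import time

    FSID = "e063dc72-1c85-11f1-a098-09993c5c5b66"   # fsid seen in this run
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

    def run_cephadm_ceph(*args):
        """Run a ceph CLI command inside `cephadm shell`, return its stdout."""
        cmd = ["sudo", "cephadm", "--image", IMAGE, "shell",
               "--fsid", FSID, "--", "ceph", *args]
        return subprocess.check_output(cmd, text=True).strip()

    def wait_for_stat_seq(osd_id, need_seq, timeout=60, interval=1):
        """Poll `ceph osd last-stat-seq osd.<id>` until it reaches need_seq.

        The log shows "got" may already exceed "need" (e.g. osd.4 above),
        so the comparison is >=, not ==.
        """
        deadline = time.time() + timeout
        while True:
            got = int(run_cephadm_ceph("osd", "last-stat-seq", f"osd.{osd_id}"))
            print(f"need seq {need_seq} got {got} for osd.{osd_id}")
            if got >= need_seq:
                return got
            if time.time() > deadline:
                raise TimeoutError(f"osd.{osd_id} never reached seq {need_seq}")
            time.sleep(interval)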
2026-03-10T13:38:01.418 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.415+0000 7faad03d4640 1 -- 192.168.123.105:0/488296344 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 4} v 0) -- 0x7faa98005470 con 0x7faac8074250 2026-03-10T13:38:01.418 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.415+0000 7faab77fe640 1 -- 192.168.123.105:0/488296344 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 4}]=0 v0) ==== 74+0+13 (unknown 1823589948 0 3140853187) 0x7faac0061de0 con 0x7faac8074250 2026-03-10T13:38:01.419 INFO:teuthology.orchestra.run.vm05.stdout:120259084300 2026-03-10T13:38:01.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.439+0000 7faab57fa640 1 -- 192.168.123.105:0/488296344 >> v1:192.168.123.105:6800/3845654103 conn(0x7faaa8078860 legacy=0x7faaa807ad20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.442 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.439+0000 7faab57fa640 1 -- 192.168.123.105:0/488296344 >> v1:192.168.123.109:6789/0 conn(0x7faac8074250 legacy=0x7faac8117390 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.441+0000 7faab57fa640 1 -- 192.168.123.105:0/488296344 shutdown_connections 2026-03-10T13:38:01.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.441+0000 7faab57fa640 1 -- 192.168.123.105:0/488296344 >> 192.168.123.105:0/488296344 conn(0x7faac806d730 msgr2=0x7faac80737e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:38:01.443 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.442+0000 7faab57fa640 1 -- 192.168.123.105:0/488296344 shutdown_connections 2026-03-10T13:38:01.445 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.443+0000 7faab57fa640 1 -- 192.168.123.105:0/488296344 wait complete. 2026-03-10T13:38:01.530 INFO:tasks.cephadm.ceph_manager.ceph:need seq 38654705684 got 38654705684 for osd.0 2026-03-10T13:38:01.530 DEBUG:teuthology.parallel:result is None 2026-03-10T13:38:01.600 INFO:tasks.cephadm.ceph_manager.ceph:need seq 197568495621 got 197568495621 for osd.7 2026-03-10T13:38:01.601 DEBUG:teuthology.parallel:result is None 2026-03-10T13:38:01.628 INFO:tasks.cephadm.ceph_manager.ceph:need seq 120259084299 got 120259084300 for osd.4 2026-03-10T13:38:01.628 DEBUG:teuthology.parallel:result is None 2026-03-10T13:38:01.630 INFO:tasks.cephadm.ceph_manager.ceph:need seq 73014444047 got 73014444048 for osd.2 2026-03-10T13:38:01.630 DEBUG:teuthology.parallel:result is None 2026-03-10T13:38:01.677 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:38:01.703 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:01 vm05 ceph-mon[51512]: pgmap v115: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 74 KiB/s rd, 5.6 KiB/s wr, 180 op/s 2026-03-10T13:38:01.704 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1998525663' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T13:38:01.704 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:01 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/219451905' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T13:38:01.704 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3535532968' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T13:38:01.704 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2667715510' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T13:38:01.704 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4108185877' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T13:38:01.704 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/488296344' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T13:38:01.704 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:01 vm05 ceph-mon[58955]: pgmap v115: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 74 KiB/s rd, 5.6 KiB/s wr, 180 op/s 2026-03-10T13:38:01.704 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1998525663' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T13:38:01.704 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/219451905' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T13:38:01.704 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3535532968' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T13:38:01.704 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2667715510' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T13:38:01.704 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4108185877' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T13:38:01.704 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:01 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/488296344' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T13:38:01.808 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.806+0000 7f777bf58640 1 -- 192.168.123.105:0/2012965816 >> v1:192.168.123.105:6790/0 conn(0x7f777410a910 legacy=0x7f777410acf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.808 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.807+0000 7f777bf58640 1 -- 192.168.123.105:0/2012965816 shutdown_connections 2026-03-10T13:38:01.808 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.807+0000 7f777bf58640 1 -- 192.168.123.105:0/2012965816 >> 192.168.123.105:0/2012965816 conn(0x7f77741005f0 msgr2=0x7f7774102a10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:38:01.808 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.807+0000 7f777bf58640 1 -- 192.168.123.105:0/2012965816 shutdown_connections 2026-03-10T13:38:01.808 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.808+0000 7f777bf58640 1 -- 192.168.123.105:0/2012965816 wait complete. 2026-03-10T13:38:01.808 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.808+0000 7f777bf58640 1 Processor -- start 2026-03-10T13:38:01.808 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.808+0000 7f777bf58640 1 -- start start 2026-03-10T13:38:01.809 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.808+0000 7f777bf58640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f77741ab720 con 0x7f777410a910 2026-03-10T13:38:01.809 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.808+0000 7f777bf58640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f77741ac920 con 0x7f777410d7c0 2026-03-10T13:38:01.809 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.808+0000 7f777bf58640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f77741adb20 con 0x7f7774111360 2026-03-10T13:38:01.809 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.808+0000 7f7779ccd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f777410a910 0x7f77741109a0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:60406/0 (socket says 192.168.123.105:60406) 2026-03-10T13:38:01.809 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.809+0000 7f7779ccd640 1 -- 192.168.123.105:0/2289394272 learned_addr learned my addr 192.168.123.105:0/2289394272 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:38:01.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.809+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2622295216 0 0) 0x7f77741ab720 con 0x7f777410a910 2026-03-10T13:38:01.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.809+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7750003620 con 0x7f777410a910 2026-03-10T13:38:01.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.809+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2798182444 0 0) 0x7f77741adb20 con 0x7f7774111360 2026-03-10T13:38:01.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.809+0000 
7f7762ffd640 1 -- 192.168.123.105:0/2289394272 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f77741ab720 con 0x7f7774111360 2026-03-10T13:38:01.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.809+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3697396978 0 0) 0x7f7750003620 con 0x7f777410a910 2026-03-10T13:38:01.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.809+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f77741adb20 con 0x7f777410a910 2026-03-10T13:38:01.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.809+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f77680041f0 con 0x7f777410a910 2026-03-10T13:38:01.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.810+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 589572745 0 0) 0x7f77741adb20 con 0x7f777410a910 2026-03-10T13:38:01.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.810+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 >> v1:192.168.123.105:6790/0 conn(0x7f7774111360 legacy=0x7f77741a9e20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.810+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 >> v1:192.168.123.109:6789/0 conn(0x7f777410d7c0 legacy=0x7f77741a6650 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.810 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.810+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f77741aed20 con 0x7f777410a910 2026-03-10T13:38:01.811 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.810+0000 7f777bf58640 1 -- 192.168.123.105:0/2289394272 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f77741ab950 con 0x7f777410a910 2026-03-10T13:38:01.811 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.810+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f7768003ca0 con 0x7f777410a910 2026-03-10T13:38:01.811 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.810+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f7768005000 con 0x7f777410a910 2026-03-10T13:38:01.811 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.811+0000 7f777bf58640 1 -- 192.168.123.105:0/2289394272 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f77741abf00 con 0x7f777410a910 2026-03-10T13:38:01.815 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.812+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7f7768003e50 con 0x7f777410a910 2026-03-10T13:38:01.815 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.812+0000 7f7760ff9640 1 -- 192.168.123.105:0/2289394272 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": 
"get_command_descriptions"} v 0) -- 0x7f7774106050 con 0x7f777410a910 2026-03-10T13:38:01.815 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.812+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f7768093ba0 con 0x7f777410a910 2026-03-10T13:38:01.815 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.815+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f776805e270 con 0x7f777410a910 2026-03-10T13:38:01.822 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph osd last-stat-seq osd.5 2026-03-10T13:38:01.912 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.911+0000 7f7760ff9640 1 -- 192.168.123.105:0/2289394272 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 1} v 0) -- 0x7f7774117110 con 0x7f777410a910 2026-03-10T13:38:01.913 INFO:teuthology.orchestra.run.vm05.stdout:55834574866 2026-03-10T13:38:01.913 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.912+0000 7f7762ffd640 1 -- 192.168.123.105:0/2289394272 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 1}]=0 v0) ==== 74+0+12 (unknown 832126871 0 311403250) 0x7f7768061f20 con 0x7f777410a910 2026-03-10T13:38:01.915 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.915+0000 7f777bf58640 1 -- 192.168.123.105:0/2289394272 >> v1:192.168.123.105:6800/3845654103 conn(0x7f7750077e50 legacy=0x7f775007a310 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.915 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.915+0000 7f777bf58640 1 -- 192.168.123.105:0/2289394272 >> v1:192.168.123.105:6789/0 conn(0x7f777410a910 legacy=0x7f77741109a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:01.916 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.915+0000 7f777bf58640 1 -- 192.168.123.105:0/2289394272 shutdown_connections 2026-03-10T13:38:01.916 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.915+0000 7f777bf58640 1 -- 192.168.123.105:0/2289394272 >> 192.168.123.105:0/2289394272 conn(0x7f77741005f0 msgr2=0x7f7774103500 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:38:01.916 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.915+0000 7f777bf58640 1 -- 192.168.123.105:0/2289394272 shutdown_connections 2026-03-10T13:38:01.916 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:01.915+0000 7f777bf58640 1 -- 192.168.123.105:0/2289394272 wait complete. 2026-03-10T13:38:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:01 vm09 ceph-mon[53367]: pgmap v115: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 74 KiB/s rd, 5.6 KiB/s wr, 180 op/s 2026-03-10T13:38:01.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1998525663' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T13:38:01.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:01 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/219451905' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T13:38:01.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3535532968' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T13:38:01.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2667715510' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T13:38:01.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4108185877' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T13:38:01.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/488296344' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T13:38:02.072 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574866 got 55834574866 for osd.1 2026-03-10T13:38:02.072 DEBUG:teuthology.parallel:result is None 2026-03-10T13:38:02.095 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:38:02.228 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.227+0000 7ff25472a640 1 -- 192.168.123.105:0/1843187787 >> v1:192.168.123.109:6789/0 conn(0x7ff24c1158c0 legacy=0x7ff24c117d60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:02.228 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.228+0000 7ff25472a640 1 -- 192.168.123.105:0/1843187787 shutdown_connections 2026-03-10T13:38:02.229 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.228+0000 7ff25472a640 1 -- 192.168.123.105:0/1843187787 >> 192.168.123.105:0/1843187787 conn(0x7ff24c078600 msgr2=0x7ff24c07aa20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:38:02.229 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.228+0000 7ff25472a640 1 -- 192.168.123.105:0/1843187787 shutdown_connections 2026-03-10T13:38:02.229 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.228+0000 7ff25472a640 1 -- 192.168.123.105:0/1843187787 wait complete. 
2026-03-10T13:38:02.229 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.228+0000 7ff25472a640 1 Processor -- start
2026-03-10T13:38:02.229 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.229+0000 7ff25472a640 1 -- start start
2026-03-10T13:38:02.229 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.229+0000 7ff25472a640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff24c1b8710 con 0x7ff24c109480
2026-03-10T13:38:02.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.229+0000 7ff25472a640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff24c1b98f0 con 0x7ff24c1158c0
2026-03-10T13:38:02.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.229+0000 7ff25472a640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff24c1baaf0 con 0x7ff24c111d20
2026-03-10T13:38:02.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.229+0000 7ff252ca0640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7ff24c1158c0 0x7ff24c1b6f60 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:42212/0 (socket says 192.168.123.105:42212)
2026-03-10T13:38:02.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.229+0000 7ff252ca0640 1 -- 192.168.123.105:0/800685041 learned_addr learned my addr 192.168.123.105:0/800685041 (peer_addr_for_me v1:192.168.123.105:0/0)
2026-03-10T13:38:02.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.229+0000 7ff23b7fe640 1 -- 192.168.123.105:0/800685041 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 418133728 0 0) 0x7ff24c1b98f0 con 0x7ff24c1158c0
2026-03-10T13:38:02.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.229+0000 7ff23b7fe640 1 -- 192.168.123.105:0/800685041 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff220003620 con 0x7ff24c1158c0
2026-03-10T13:38:02.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.230+0000 7ff23b7fe640 1 -- 192.168.123.105:0/800685041 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2994470481 0 0) 0x7ff220003620 con 0x7ff24c1158c0
2026-03-10T13:38:02.230 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.230+0000 7ff23b7fe640 1 -- 192.168.123.105:0/800685041 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7ff24c1b98f0 con 0x7ff24c1158c0
2026-03-10T13:38:02.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.230+0000 7ff23b7fe640 1 -- 192.168.123.105:0/800685041 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7ff248003a00 con 0x7ff24c1158c0
2026-03-10T13:38:02.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.230+0000 7ff23b7fe640 1 -- 192.168.123.105:0/800685041 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2635226770 0 0) 0x7ff24c1b98f0 con 0x7ff24c1158c0
2026-03-10T13:38:02.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.230+0000 7ff23b7fe640 1 -- 192.168.123.105:0/800685041 >> v1:192.168.123.105:6790/0 conn(0x7ff24c111d20 legacy=0x7ff24c082a90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:02.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.230+0000 7ff23b7fe640 1 -- 192.168.123.105:0/800685041 >> v1:192.168.123.105:6789/0 conn(0x7ff24c109480 legacy=0x7ff24c10e2b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:02.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.230+0000 7ff23b7fe640 1 -- 192.168.123.105:0/800685041 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff24c1bbcf0 con 0x7ff24c1158c0
2026-03-10T13:38:02.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.231+0000 7ff25472a640 1 -- 192.168.123.105:0/800685041 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7ff24c1b8940 con 0x7ff24c1158c0
2026-03-10T13:38:02.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.231+0000 7ff23b7fe640 1 -- 192.168.123.105:0/800685041 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7ff2480032f0 con 0x7ff24c1158c0
2026-03-10T13:38:02.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.231+0000 7ff23b7fe640 1 -- 192.168.123.105:0/800685041 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7ff2480052d0 con 0x7ff24c1158c0
2026-03-10T13:38:02.231 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.231+0000 7ff25472a640 1 -- 192.168.123.105:0/800685041 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7ff24c1b8ef0 con 0x7ff24c1158c0
2026-03-10T13:38:02.232 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.232+0000 7ff25472a640 1 -- 192.168.123.105:0/800685041 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff21c005180 con 0x7ff24c1158c0
2026-03-10T13:38:02.234 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.234+0000 7ff23b7fe640 1 -- 192.168.123.105:0/800685041 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7ff248005470 con 0x7ff24c1158c0
2026-03-10T13:38:02.235 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.235+0000 7ff23b7fe640 1 -- 192.168.123.105:0/800685041 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7ff248095540 con 0x7ff24c1158c0
2026-03-10T13:38:02.240 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.240+0000 7ff23b7fe640 1 -- 192.168.123.105:0/800685041 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7ff24805e4b0 con 0x7ff24c1158c0
2026-03-10T13:38:02.341 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.340+0000 7ff25472a640 1 -- 192.168.123.105:0/800685041 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd last-stat-seq", "id": 5} v 0) -- 0x7ff21c005470 con 0x7ff24c1158c0
2026-03-10T13:38:02.342 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.342+0000 7ff23b7fe640 1 -- 192.168.123.105:0/800685041 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 5}]=0 v0) ==== 74+0+13 (unknown 2131975755 0 3857547566) 0x7ff248062160 con 0x7ff24c1158c0
2026-03-10T13:38:02.343 INFO:teuthology.orchestra.run.vm05.stdout:150323855369
2026-03-10T13:38:02.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.344+0000 7ff25472a640 1 -- 192.168.123.105:0/800685041 >> v1:192.168.123.105:6800/3845654103 conn(0x7ff220078190 legacy=0x7ff22007a650 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:02.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.344+0000 7ff25472a640 1 -- 192.168.123.105:0/800685041 >> v1:192.168.123.109:6789/0 conn(0x7ff24c1158c0 legacy=0x7ff24c1b6f60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:02.344 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.344+0000 7ff25472a640 1 -- 192.168.123.105:0/800685041 shutdown_connections
2026-03-10T13:38:02.345 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.344+0000 7ff25472a640 1 -- 192.168.123.105:0/800685041 >> 192.168.123.105:0/800685041 conn(0x7ff24c078600 msgr2=0x7ff24c118d50 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:38:02.345 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.344+0000 7ff25472a640 1 -- 192.168.123.105:0/800685041 shutdown_connections
2026-03-10T13:38:02.345 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.345+0000 7ff25472a640 1 -- 192.168.123.105:0/800685041 wait complete.
2026-03-10T13:38:02.522 INFO:tasks.cephadm.ceph_manager.ceph:need seq 150323855369 got 150323855369 for osd.5
2026-03-10T13:38:02.522 DEBUG:teuthology.parallel:result is None
2026-03-10T13:38:02.522 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean
2026-03-10T13:38:02.522 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph pg dump --format=json
2026-03-10T13:38:02.648 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:02 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2289394272' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-10T13:38:02.648 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:02 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/800685041' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-10T13:38:02.740 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config
2026-03-10T13:38:02.897 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:02 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2289394272' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-10T13:38:02.897 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:02 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/800685041' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-10T13:38:02.898 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.896+0000 7f5de1e8d640 1 -- 192.168.123.105:0/1280265257 >> v1:192.168.123.105:6789/0 conn(0x7f5ddc10d7f0 legacy=0x7f5ddc10fbe0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:02.898 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.897+0000 7f5de1e8d640 1 -- 192.168.123.105:0/1280265257 shutdown_connections
2026-03-10T13:38:02.898 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.897+0000 7f5de1e8d640 1 -- 192.168.123.105:0/1280265257 >> 192.168.123.105:0/1280265257 conn(0x7f5ddc100620 msgr2=0x7f5ddc102a40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:38:02.898 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.897+0000 7f5de1e8d640 1 -- 192.168.123.105:0/1280265257 shutdown_connections
2026-03-10T13:38:02.898 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.897+0000 7f5de1e8d640 1 -- 192.168.123.105:0/1280265257 wait complete.
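The records above are tasks.cephadm.ceph_manager flushing PG stats before it checks cluster health: the helper client asks the mon for "osd last-stat-seq" of each OSD (here osd.5, answered with 150323855369 on stdout), confirms "need seq 150323855369 got 150323855369 for osd.5", logs "waiting for clean", and launches `ceph pg dump --format=json` through the cephadm shell. A minimal sketch of that barrier in Python, assuming a hypothetical shell() helper that wraps the `sudo cephadm shell -- ceph ...` invocation shown above (container image flag omitted); `ceph tell osd.N flush_pg_stats` and `ceph osd last-stat-seq` are the commands visible in the log:

    import subprocess
    import time

    FSID = "e063dc72-1c85-11f1-a098-09993c5c5b66"  # fsid from the log above

    def shell(*args):
        # Hypothetical helper: run a ceph command inside the cephadm shell,
        # mirroring the invocation logged above, and return its stdout.
        cmd = ["sudo", "cephadm", "shell", "--fsid", FSID, "--", "ceph", *args]
        return subprocess.check_output(cmd, text=True).strip()

    def flush_pg_stats(osd_ids, timeout=90):
        # Ask each OSD to publish its PG stats, then wait until the mon's
        # last-stat-seq for that OSD catches up -- the "need seq X got X"
        # barrier logged above.
        for osd in osd_ids:
            need = int(shell("tell", f"osd.{osd}", "flush_pg_stats"))
            deadline = time.time() + timeout
            while int(shell("osd", "last-stat-seq", f"osd.{osd}")) < need:
                if time.time() > deadline:
                    raise RuntimeError(f"osd.{osd} stats did not catch up")
                time.sleep(1)

Without this barrier a "clean" verdict could be read off stale PG stats; waiting until the mon has seen a report at least as new as each OSD's flush sequence makes the subsequent pg dump trustworthy.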
2026-03-10T13:38:02.898 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.898+0000 7f5de1e8d640 1 Processor -- start
2026-03-10T13:38:02.898 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.898+0000 7f5de1e8d640 1 -- start start
2026-03-10T13:38:02.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.898+0000 7f5de1e8d640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f5ddc111000 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.898+0000 7f5de1e8d640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f5ddc1acde0 con 0x7f5ddc10a940
2026-03-10T13:38:02.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.898+0000 7f5de1e8d640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f5ddc1adfc0 con 0x7f5ddc111390
2026-03-10T13:38:02.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.898+0000 7f5ddaffd640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f5ddc10d7f0 0x7f5ddc10e620 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:60428/0 (socket says 192.168.123.105:60428)
2026-03-10T13:38:02.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.898+0000 7f5ddaffd640 1 -- 192.168.123.105:0/3863319795 learned_addr learned my addr 192.168.123.105:0/3863319795 (peer_addr_for_me v1:192.168.123.105:0/0)
2026-03-10T13:38:02.899 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.899+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 868855776 0 0) 0x7f5ddc111000 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.899+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5da4003620 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.899+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1237959871 0 0) 0x7f5ddc1acde0 con 0x7f5ddc10a940
2026-03-10T13:38:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.899+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5ddc111000 con 0x7f5ddc10a940
2026-03-10T13:38:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.899+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1461469731 0 0) 0x7f5da4003620 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.899+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f5ddc1acde0 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.899+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f5dc8003230 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.899+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3510984454 0 0) 0x7f5ddc111000 con 0x7f5ddc10a940
2026-03-10T13:38:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.900+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f5da4003620 con 0x7f5ddc10a940
2026-03-10T13:38:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.900+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f5dcc002f70 con 0x7f5ddc10a940
2026-03-10T13:38:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.900+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3001316598 0 0) 0x7f5ddc1acde0 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.900+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 >> v1:192.168.123.105:6790/0 conn(0x7f5ddc111390 legacy=0x7f5ddc1aa6b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.900+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 >> v1:192.168.123.109:6789/0 conn(0x7f5ddc10a940 legacy=0x7f5ddc10df10 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:02.900 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.900+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5ddc1af1a0 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.901 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.900+0000 7f5de1e8d640 1 -- 192.168.123.105:0/3863319795 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f5ddc1acfb0 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.900+0000 7f5de1e8d640 1 -- 192.168.123.105:0/3863319795 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f5ddc1ad510 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.901+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f5dc8003e20 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.901+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f5dc80053a0 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.901+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7f5dc8005540 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.902 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.902+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f5dc8095c10 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.902+0000 7f5de1e8d640 1 -- 192.168.123.105:0/3863319795 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5da8005180 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.905 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:02.905+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f5dc805ef30 con 0x7f5ddc10d7f0
2026-03-10T13:38:02.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:02 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2289394272' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-10T13:38:02.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:02 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/800685041' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-10T13:38:02.924 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:02 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[78798]: ts=2026-03-10T13:38:02.818Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
2026-03-10T13:38:03.001 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.000+0000 7f5de1e8d640 1 -- 192.168.123.105:0/3863319795 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7f5da8002bf0 con 0x7f5da4078330
2026-03-10T13:38:03.006 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.006+0000 7f5dd8ff9640 1 -- 192.168.123.105:0/3863319795 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 dumped all) ==== 18+0+347370 (unknown 2965378022 0 1952213287) 0x7f5da8002bf0 con 0x7f5da4078330
2026-03-10T13:38:03.006 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T13:38:03.009 INFO:teuthology.orchestra.run.vm05.stderr:dumped all
2026-03-10T13:38:03.012 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.011+0000 7f5de1e8d640 1 -- 192.168.123.105:0/3863319795 >> v1:192.168.123.105:6800/3845654103 conn(0x7f5da4078330 legacy=0x7f5da407a7f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:03.012 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.011+0000 7f5de1e8d640 1 -- 192.168.123.105:0/3863319795 >> v1:192.168.123.105:6789/0 conn(0x7f5ddc10d7f0 legacy=0x7f5ddc10e620 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T13:38:03.012 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.011+0000 7f5de1e8d640 1 -- 192.168.123.105:0/3863319795 shutdown_connections
2026-03-10T13:38:03.012 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.011+0000 7f5de1e8d640 1 -- 192.168.123.105:0/3863319795 >> 192.168.123.105:0/3863319795 conn(0x7f5ddc100620 msgr2=0x7f5ddc1146d0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T13:38:03.012 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.012+0000 7f5de1e8d640 1 -- 192.168.123.105:0/3863319795 shutdown_connections
2026-03-10T13:38:03.012 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.012+0000 7f5de1e8d640 1 -- 192.168.123.105:0/3863319795 wait complete.
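With the stats flushed, "waiting for clean" polls `ceph pg dump --format=json` (the mgr_command above, acknowledged with "dumped all") until every PG reports clean; the large JSON object that follows on stdout is one such poll. A rough sketch of that check, reusing the hypothetical shell() helper from the earlier sketch and a simplified reading of "clean" (the real predicate in tasks.cephadm.ceph_manager is more involved):

    import json
    import time

    def is_clean():
        # Parse the pg dump payload (as printed below): treat the cluster as
        # clean when stats are ready and every entry in pg_map.pg_stats has a
        # state containing both "active" and "clean", e.g. "active+clean".
        dump = json.loads(shell("pg", "dump", "--format=json"))
        if not dump.get("pg_ready"):
            return False
        return all("active" in pg["state"] and "clean" in pg["state"]
                   for pg in dump["pg_map"]["pg_stats"])

    def wait_for_clean(timeout=300):
        # Poll until clean, as the "waiting for clean" loop above does.
        deadline = time.time() + timeout
        while not is_clean():
            if time.time() > deadline:
                raise RuntimeError("cluster never went clean")
            time.sleep(3)

In the dump below every pg_stats entry is already "active+clean" (the pg_stats_sum shows no degraded, misplaced, or unfound objects), so this poll succeeds on the first pass.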
2026-03-10T13:38:03.162 INFO:teuthology.orchestra.run.vm05.stdout:{"pg_ready":true,"pg_map":{"version":116,"stamp":"2026-03-10T13:38:02.558084+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":775,"num_read_kb":518,"num_write":493,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":505,"ondisk_log_size":505,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":389,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":220676,"kb_used_data":5980,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518716,"statfs":{"total":171765137408,"available":171539165184,"internally_reserved":0,"allocated":6123520,"data_stored":3150993,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12710,"internal_metadata":219663962},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":4325,"num_objects":183,"num_object_clones":0,"num_object_copies":549,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":183,"num_whiteouts":0,"num_read":705,"num_read_kb":461,"num_write":421,"num_write_kb":35,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"8.001841"},"pg_stats":[{"pgid":"3.1f","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605435+0000","last_ch
ange":"2026-03-10T13:37:48.261382+0000","last_active":"2026-03-10T13:37:55.605435+0000","last_peered":"2026-03-10T13:37:55.605435+0000","last_clean":"2026-03-10T13:37:55.605435+0000","last_became_active":"2026-03-10T13:37:48.261230+0000","last_became_peered":"2026-03-10T13:37:48.261230+0000","last_unstale":"2026-03-10T13:37:55.605435+0000","last_undegraded":"2026-03-10T13:37:55.605435+0000","last_fullsized":"2026-03-10T13:37:55.605435+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:45:43.801591+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.18","version":"58'9","reported_seq":39,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.935315+0000","last_change":"2026-03-10T13:37:50.283127+0000","last_active":"2026-03-10T13:37:55.935315+0000","last_peered":"2026-03-10T13:37:55.935315+0000","last_clean":"2026-03-10T13:37:55.935315+0000","last_became_active":"2026-03-10T13:37:50.282819+0000","last_became_peered":"2026-03-10T13:37:50.282819+0000","last_unstale":"2026-03-10T13:37:55.935315+0000","last_undegraded":"2026-03-10T13:37:55.935315+0000","last_fullsized":"2026-03-10T13:37:55.935315+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"period
ic scrub scheduled @ 2026-03-12T00:16:08.578891+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.616923+0000","last_change":"2026-03-10T13:37:52.288423+0000","last_active":"2026-03-10T13:37:55.616923+0000","last_peered":"2026-03-10T13:37:55.616923+0000","last_clean":"2026-03-10T13:37:55.616923+0000","last_became_active":"2026-03-10T13:37:52.288205+0000","last_became_peered":"2026-03-10T13:37:52.288205+0000","last_unstale":"2026-03-10T13:37:55.616923+0000","last_undegraded":"2026-03-10T13:37:55.616923+0000","last_fullsized":"2026-03-10T13:37:55.616923+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:28:39.839601+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1a","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596842+0000","last_change":"2026-03-10T13:37:54.347025+0000","last_active":"2026-03-10T13:37:55.596842+0000","last_peered":"2026-03-10T13:37:55.596842+0000","last_clean":"2026-03-10T13:37:55.596842+0000","last_became_active":"2026-03-10T13:37:54.346925+0000","last_became_peered":"2026-03-10T13:37:54.346925+0000","last_unstale":"2026-03-10T13:37:55.596842+0000","last_undegraded":"2026-03-10T13:37:55.596842+0000","last_fullsized":"2026-03-10T13:37:55.596842+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:17:06.595218+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.1b","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602871+0000","last_change":"2026-03-10T13:37:54.385278+0000","last_active":"2026-03-10T13:37:55.602871+0000","last_peered":"2026-03-10T13:37:55.602871+0000","last_clean":"2026-03-10T13:37:55.602871+0000","last_became_active":"2026-03-10T13:37:54.384853+0000","last_became_peered":"2026-03-10T13:37:54.384853+0000","last_unstale":"2026-03-10T13:37:55.602871+0000","last_undegraded":"2026-03-10T13:37:55.602871+0000","last_fullsized":"2026-03-10T13:37:55.602871+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:40:10.603601+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1e","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602927+0000","last_change":"2026-03-10T13:37:48.246183+0000","last_active":"2026-03-10T13:37:55.602927+0000","last_peered":"2026-03-10T13:37:55.602927+0000","last_clean":"2026-03-10T13:37:55.602927+0000","last_became_active":"2026-03-10T13:37:48.246098+0000","last_became_peered":"2026-03-10T13:37:48.246098+0000","last_unstale":"2026-03-10T13:37:55.602927+0000","last_undegraded":"2026-03-10T13:37:55.602927+0000","last_fullsized":"2026-03-10T13:37:55.602927+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:39:13.642013+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.19","version":"58'15","reported_seq":48,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.292448+0000","last_change":"2026-03-10T13:37:50.263242+0000","last_active":"2026-03-10T13:37:56.292448+0000","last_peered":"2026-03-10T13:37:56.292448+0000","last_clean":"2026-03-10T13:37:56.292448+0000","last_became_active":"2026-03-10T13:37:50.263011+0000","last_became_peered":"2026-03-10T13:37:50.263011+0000","last_unstale":"2026-03-10T13:37:56.292448+0000","last_undegraded":"2026-03-10T13:37:56.292448+0000","last_fullsized":"2026-03-10T13:37:56.292448+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:19:53.502158+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,2,0],"acting":[3,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.595906+0000","last_change":"2026-03-10T13:37:52.288304+0000","last_active":"2026-03-10T13:37:55.595906+0000","last_peered":"2026-03-10T13:37:55.595906+0000","last_clean":"2026-03-10T13:37:55.595906+0000","last_became_active":"2026-03-10T13:37:52.279374+0000","last_became_peered":"2026-03-10T13:37:52.279374+0000","last_unstale":"2026-03-10T13:37:55.595906+0000","last_undegraded":"2026-03-10T13:37:55.595906+0000","last_fullsized":"2026-03-10T13:37:55.595906+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:42:36.264701+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1d","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629042+0000","last_change":"2026-03-10T13:37:48.261987+0000","last_active":"2026-03-10T13:37:55.629042+0000","last_peered":"2026-03-10T13:37:55.629042+0000","last_clean":"2026-03-10T13:37:55.629042+0000","last_became_active":"2026-03-10T13:37:48.261859+0000","last_became_peered":"2026-03-10T13:37:48.261859+0000","last_unstale":"2026-03-10T13:37:55.629042+0000","last_undegraded":"2026-03-10T13:37:55.629042+0000","last_fullsized":"2026-03-10T13:37:55.629042+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:38:33.629402+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1a","version":"58'9","reported_seq":39,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.134466+0000","last_change":"2026-03-10T13:37:50.269444+0000","last_active":"2026-03-10T13:37:56.134466+0000","last_peered":"2026-03-10T13:37:56.134466+0000","last_clean":"2026-03-10T13:37:56.134466+0000","last_became_active":"2026-03-10T13:37:50.269346+0000","last_became_peered":"2026-03-10T13:37:50.269346+0000","last_unstale":"2026-03-10T13:37:56.134466+0000","last_undegraded":"2026-03-10T13:37:56.134466+0000","last_fullsized":"2026-03-10T13:37:56.134466+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:47:35.869366+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,0],"acting":[4,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.628918+0000","last_change":"2026-03-10T13:37:52.334700+0000","last_active":"2026-03-10T13:37:55.628918+0000","last_peered":"2026-03-10T13:37:55.628918+0000","last_clean":"2026-03-10T13:37:55.628918+0000","last_became_active":"2026-03-10T13:37:52.334521+0000","last_became_peered":"2026-03-10T13:37:52.334521+0000","last_unstale":"2026-03-10T13:37:55.628918+0000","last_undegraded":"2026-03-10T13:37:55.628918+0000","last_fullsized":"2026-03-10T13:37:55.628918+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:53:59.212198+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605514+0000","last_change":"2026-03-10T13:37:54.368727+0000","last_active":"2026-03-10T13:37:55.605514+0000","last_peered":"2026-03-10T13:37:55.605514+0000","last_clean":"2026-03-10T13:37:55.605514+0000","last_became_active":"2026-03-10T13:37:54.368260+0000","last_became_peered":"2026-03-10T13:37:54.368260+0000","last_unstale":"2026-03-10T13:37:55.605514+0000","last_undegraded":"2026-03-10T13:37:55.605514+0000","last_fullsized":"2026-03-10T13:37:55.605514+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:47:04.304579+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1c","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629079+0000","last_change":"2026-03-10T13:37:48.261678+0000","last_active":"2026-03-10T13:37:55.629079+0000","last_peered":"2026-03-10T13:37:55.629079+0000","last_clean":"2026-03-10T13:37:55.629079+0000","last_became_active":"2026-03-10T13:37:48.261573+0000","last_became_peered":"2026-03-10T13:37:48.261573+0000","last_unstale":"2026-03-10T13:37:55.629079+0000","last_undegraded":"2026-03-10T13:37:55.629079+0000","last_fullsized":"2026-03-10T13:37:55.629079+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:57:07.970943+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1b","version":"58'5","reported_seq":33,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.085108+0000","last_change":"2026-03-10T13:37:50.294812+0000","last_active":"2026-03-10T13:37:56.085108+0000","last_peered":"2026-03-10T13:37:56.085108+0000","last_clean":"2026-03-10T13:37:56.085108+0000","last_became_active":"2026-03-10T13:37:50.294012+0000","last_became_peered":"2026-03-10T13:37:50.294012+0000","last_unstale":"2026-03-10T13:37:56.085108+0000","last_undegraded":"2026-03-10T13:37:56.085108+0000","last_fullsized":"2026-03-10T13:37:56.085108+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:01:37.351580+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,1],"acting":[4,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594513+0000","last_change":"2026-03-10T13:37:52.277265+0000","last_active":"2026-03-10T13:37:55.594513+0000","last_peered":"2026-03-10T13:37:55.594513+0000","last_clean":"2026-03-10T13:37:55.594513+0000","last_became_active":"2026-03-10T13:37:52.276994+0000","last_became_peered":"2026-03-10T13:37:52.276994+0000","last_unstale":"2026-03-10T13:37:55.594513+0000","last_undegraded":"2026-03-10T13:37:55.594513+0000","last_fullsized":"2026-03-10T13:37:55.594513+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:50:55.170143+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629151+0000","last_change":"2026-03-10T13:37:54.362426+0000","last_active":"2026-03-10T13:37:55.629151+0000","last_peered":"2026-03-10T13:37:55.629151+0000","last_clean":"2026-03-10T13:37:55.629151+0000","last_became_active":"2026-03-10T13:37:54.362139+0000","last_became_peered":"2026-03-10T13:37:54.362139+0000","last_unstale":"2026-03-10T13:37:55.629151+0000","last_undegraded":"2026-03-10T13:37:55.629151+0000","last_fullsized":"2026-03-10T13:37:55.629151+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:00:49.980455+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596717+0000","last_change":"2026-03-10T13:37:54.384072+0000","last_active":"2026-03-10T13:37:55.596717+0000","last_peered":"2026-03-10T13:37:55.596717+0000","last_clean":"2026-03-10T13:37:55.596717+0000","last_became_active":"2026-03-10T13:37:54.383400+0000","last_became_peered":"2026-03-10T13:37:54.383400+0000","last_unstale":"2026-03-10T13:37:55.596717+0000","last_undegraded":"2026-03-10T13:37:55.596717+0000","last_fullsized":"2026-03-10T13:37:55.596717+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:24:49.178116+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1b","version":"50'1","reported_seq":33,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.606077+0000","last_change":"2026-03-10T13:37:48.265883+0000","last_active":"2026-03-10T13:37:55.606077+0000","last_peered":"2026-03-10T13:37:55.606077+0000","last_clean":"2026-03-10T13:37:55.606077+0000","last_became_active":"2026-03-10T13:37:48.265512+0000","last_became_peered":"2026-03-10T13:37:48.265512+0000","last_unstale":"2026-03-10T13:37:55.606077+0000","last_undegraded":"2026-03-10T13:37:55.606077+0000","last_fullsized":"2026-03-10T13:37:55.606077+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:56:00.545045+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.1c","version":"58'15","reported_seq":48,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.181665+0000","last_change":"2026-03-10T13:37:50.272119+0000","last_active":"2026-03-10T13:37:56.181665+0000","last_peered":"2026-03-10T13:37:56.181665+0000","last_clean":"2026-03-10T13:37:56.181665+0000","last_became_active":"2026-03-10T13:37:50.272043+0000","last_became_peered":"2026-03-10T13:37:50.272043+0000","last_unstale":"2026-03-10T13:37:56.181665+0000","last_undegraded":"2026-03-10T13:37:56.181665+0000","last_fullsized":"2026-03-10T13:37:56.181665+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:32:58.947398+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,3],"acting":[2,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.617112+0000","last_change":"2026-03-10T13:37:52.289053+0000","last_active":"2026-03-10T13:37:55.617112+0000","last_peered":"2026-03-10T13:37:55.617112+0000","last_clean":"2026-03-10T13:37:55.617112+0000","last_became_active":"2026-03-10T13:37:52.288908+0000","last_became_peered":"2026-03-10T13:37:52.288908+0000","last_unstale":"2026-03-10T13:37:55.617112+0000","last_undegraded":"2026-03-10T13:37:55.617112+0000","last_fullsized":"2026-03-10T13:37:55.617112+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:53:54.242254+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602789+0000","last_change":"2026-03-10T13:37:54.385176+0000","last_active":"2026-03-10T13:37:55.602789+0000","last_peered":"2026-03-10T13:37:55.602789+0000","last_clean":"2026-03-10T13:37:55.602789+0000","last_became_active":"2026-03-10T13:37:54.384954+0000","last_became_peered":"2026-03-10T13:37:54.384954+0000","last_unstale":"2026-03-10T13:37:55.602789+0000","last_undegraded":"2026-03-10T13:37:55.602789+0000","last_fullsized":"2026-03-10T13:37:55.602789+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:52:00.808471+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596474+0000","last_change":"2026-03-10T13:37:48.256424+0000","last_active":"2026-03-10T13:37:55.596474+0000","last_peered":"2026-03-10T13:37:55.596474+0000","last_clean":"2026-03-10T13:37:55.596474+0000","last_became_active":"2026-03-10T13:37:48.256124+0000","last_became_peered":"2026-03-10T13:37:48.256124+0000","last_unstale":"2026-03-10T13:37:55.596474+0000","last_undegraded":"2026-03-10T13:37:55.596474+0000","last_fullsized":"2026-03-10T13:37:55.596474+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:12:26.771941+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1d","version":"58'12","reported_seq":46,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.299010+0000","last_change":"2026-03-10T13:37:50.279840+0000","last_active":"2026-03-10T13:37:56.299010+0000","last_peered":"2026-03-10T13:37:56.299010+0000","last_clean":"2026-03-10T13:37:56.299010+0000","last_became_active":"2026-03-10T13:37:50.279746+0000","last_became_peered":"2026-03-10T13:37:50.279746+0000","last_unstale":"2026-03-10T13:37:56.299010+0000","last_undegraded":"2026-03-10T13:37:56.299010+0000","last_fullsized":"2026-03-10T13:37:56.299010+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:28:32.598870+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596323+0000","last_change":"2026-03-10T13:37:52.274977+0000","last_active":"2026-03-10T13:37:55.596323+0000","last_peered":"2026-03-10T13:37:55.596323+0000","last_clean":"2026-03-10T13:37:55.596323+0000","last_became_active":"2026-03-10T13:37:52.274630+0000","last_became_peered":"2026-03-10T13:37:52.274630+0000","last_unstale":"2026-03-10T13:37:55.596323+0000","last_undegraded":"2026-03-10T13:37:55.596323+0000","last_fullsized":"2026-03-10T13:37:55.596323+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:55:22.186174+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.1c","version":"58'1","reported_seq":16,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.595245+0000","last_change":"2026-03-10T13:37:54.369275+0000","last_active":"2026-03-10T13:37:55.595245+0000","last_peered":"2026-03-10T13:37:55.595245+0000","last_clean":"2026-03-10T13:37:55.595245+0000","last_became_active":"2026-03-10T13:37:54.369142+0000","last_became_peered":"2026-03-10T13:37:54.369142+0000","last_unstale":"2026-03-10T13:37:55.595245+0000","last_undegraded":"2026-03-10T13:37:55.595245+0000","last_fullsized":"2026-03-10T13:37:55.595245+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:49:57.927606+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"50'1","reported_seq":28,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.617351+0000","last_change":"2026-03-10T13:37:48.260445+0000","last_active":"2026-03-10T13:37:55.617351+0000","last_peered":"2026-03-10T13:37:55.617351+0000","last_clean":"2026-03-10T13:37:55.617351+0000","last_became_active":"2026-03-10T13:37:48.260209+0000","last_became_peered":"2026-03-10T13:37:48.260209+0000","last_unstale":"2026-03-10T13:37:55.617351+0000","last_undegraded":"2026-03-10T13:37:55.617351+0000","last_fullsized":"2026-03-10T13:37:55.617351+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:17:07.458536+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.1e","version":"58'10","reported_seq":38,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.279542+0000","last_change":"2026-03-10T13:37:50.275138+0000","last_active":"2026-03-10T13:37:56.279542+0000","last_peered":"2026-03-10T13:37:56.279542+0000","last_clean":"2026-03-10T13:37:56.279542+0000","last_became_active":"2026-03-10T13:37:50.274796+0000","last_became_peered":"2026-03-10T13:37:50.274796+0000","last_unstale":"2026-03-10T13:37:56.279542+0000","last_undegraded":"2026-03-10T13:37:56.279542+0000","last_fullsized":"2026-03-10T13:37:56.279542+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:23:31.612371+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1f","version":"58'8","reported_seq":33,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.755821+0000","last_change":"2026-03-10T13:37:52.275739+0000","last_active":"2026-03-10T13:37:55.755821+0000","last_peered":"2026-03-10T13:37:55.755821+0000","last_clean":"2026-03-10T13:37:55.755821+0000","last_became_active":"2026-03-10T13:37:52.275525+0000","last_became_peered":"2026-03-10T13:37:52.275525+0000","last_unstale":"2026-03-10T13:37:55.755821+0000","last_undegraded":"2026-03-10T13:37:55.755821+0000","last_fullsized":"2026-03-10T13:37:55.755821+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:55:06.765772+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.f","version":"58'15","reported_seq":48,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.309345+0000","last_change":"2026-03-10T13:37:50.285322+0000","last_active":"2026-03-10T13:37:56.309345+0000","last_peered":"2026-03-10T13:37:56.309345+0000","last_clean":"2026-03-10T13:37:56.309345+0000","last_became_active":"2026-03-10T13:37:50.284883+0000","last_became_peered":"2026-03-10T13:37:50.284883+0000","last_unstale":"2026-03-10T13:37:56.309345+0000","last_undegraded":"2026-03-10T13:37:56.309345+0000","last_fullsized":"2026-03-10T13:37:56.309345+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:04:04.846287+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.8","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.603009+0000","last_change":"2026-03-10T13:37:48.245427+0000","last_active":"2026-03-10T13:37:55.603009+0000","last_peered":"2026-03-10T13:37:55.603009+0000","last_clean":"2026-03-10T13:37:55.603009+0000","last_became_active":"2026-03-10T13:37:48.245298+0000","last_became_peered":"2026-03-10T13:37:48.245298+0000","last_unstale":"2026-03-10T13:37:55.603009+0000","last_undegraded":"2026-03-10T13:37:55.603009+0000","last_fullsized":"2026-03-10T13:37:55.603009+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:15:19.145024+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.e","version":"58'8","reported_seq":30,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.779232+0000","last_change":"2026-03-10T13:37:52.289819+0000","last_active":"2026-03-10T13:37:55.779232+0000","last_peered":"2026-03-10T13:37:55.779232+0000","last_clean":"2026-03-10T13:37:55.779232+0000","last_became_active":"2026-03-10T13:37:52.289670+0000","last_became_peered":"2026-03-10T13:37:52.289670+0000","last_unstale":"2026-03-10T13:37:55.779232+0000","last_undegraded":"2026-03-10T13:37:55.779232+0000","last_fullsized":"2026-03-10T13:37:55.779232+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:56:44.344376+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629803+0000","last_change":"2026-03-10T13:37:54.369883+0000","last_active":"2026-03-10T13:37:55.629803+0000","last_peered":"2026-03-10T13:37:55.629803+0000","last_clean":"2026-03-10T13:37:55.629803+0000","last_became_active":"2026-03-10T13:37:54.369793+0000","last_became_peered":"2026-03-10T13:37:54.369793+0000","last_unstale":"2026-03-10T13:37:55.629803+0000","last_undegraded":"2026-03-10T13:37:55.629803+0000","last_fullsized":"2026-03-10T13:37:55.629803+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:15:25.187079+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.0","version":"58'18","reported_seq":55,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.212989+0000","last_change":"2026-03-10T13:37:50.275279+0000","last_active":"2026-03-10T13:37:56.212989+0000","last_peered":"2026-03-10T13:37:56.212989+0000","last_clean":"2026-03-10T13:37:56.212989+0000","last_became_active":"2026-03-10T13:37:50.274378+0000","last_became_peered":"2026-03-10T13:37:50.274378+0000","last_unstale":"2026-03-10T13:37:56.212989+0000","last_undegraded":"2026-03-10T13:37:56.212989+0000","last_fullsized":"2026-03-10T13:37:56.212989+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:01:52.730993+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.7","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.603165+0000","last_change":"2026-03-10T13:37:48.255517+0000","last_active":"2026-03-10T13:37:55.603165+0000","last_peered":"2026-03-10T13:37:55.603165+0000","last_clean":"2026-03-10T13:37:55.603165+0000","last_became_active":"2026-03-10T13:37:48.255409+0000","last_became_peered":"2026-03-10T13:37:48.255409+0000","last_unstale":"2026-03-10T13:37:55.603165+0000","last_undegraded":"2026-03-10T13:37:55.603165+0000","last_fullsized":"2026-03-10T13:37:55.603165+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:50:43.978621+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.595954+0000","last_change":"2026-03-10T13:37:52.279050+0000","last_active":"2026-03-10T13:37:55.595954+0000","last_peered":"2026-03-10T13:37:55.595954+0000","last_clean":"2026-03-10T13:37:55.595954+0000","last_became_active":"2026-03-10T13:37:52.278977+0000","last_became_peered":"2026-03-10T13:37:52.278977+0000","last_unstale":"2026-03-10T13:37:55.595954+0000","last_undegraded":"2026-03-10T13:37:55.595954+0000","last_fullsized":"2026-03-10T13:37:55.595954+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:25:25.421952+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.595930+0000","last_change":"2026-03-10T13:37:54.363185+0000","last_active":"2026-03-10T13:37:55.595930+0000","last_peered":"2026-03-10T13:37:55.595930+0000","last_clean":"2026-03-10T13:37:55.595930+0000","last_became_active":"2026-03-10T13:37:54.362918+0000","last_became_peered":"2026-03-10T13:37:54.362918+0000","last_unstale":"2026-03-10T13:37:55.595930+0000","last_undegraded":"2026-03-10T13:37:55.595930+0000","last_fullsized":"2026-03-10T13:37:55.595930+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:06:10.006331+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1","version":"58'14","reported_seq":44,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.282771+0000","last_change":"2026-03-10T13:37:50.271662+0000","last_active":"2026-03-10T13:37:56.282771+0000","last_peered":"2026-03-10T13:37:56.282771+0000","last_clean":"2026-03-10T13:37:56.282771+0000","last_became_active":"2026-03-10T13:37:50.270857+0000","last_became_peered":"2026-03-10T13:37:50.270857+0000","last_unstale":"2026-03-10T13:37:56.282771+0000","last_undegraded":"2026-03-10T13:37:56.282771+0000","last_fullsized":"2026-03-10T13:37:56.282771+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:14:40.676561+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.6","version":"50'1","reported_seq":28,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605340+0000","last_change":"2026-03-10T13:37:48.266452+0000","last_active":"2026-03-10T13:37:55.605340+0000","last_peered":"2026-03-10T13:37:55.605340+0000","last_clean":"2026-03-10T13:37:55.605340+0000","last_became_active":"2026-03-10T13:37:48.266368+0000","last_became_peered":"2026-03-10T13:37:48.266368+0000","last_unstale":"2026-03-10T13:37:55.605340+0000","last_undegraded":"2026-03-10T13:37:55.605340+0000","last_fullsized":"2026-03-10T13:37:55.605340+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:00:08.857999+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.0","version":"58'8","reported_seq":33,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.768443+0000","last_change":"2026-03-10T13:37:52.274749+0000","last_active":"2026-03-10T13:37:55.768443+0000","last_peered":"2026-03-10T13:37:55.768443+0000","last_clean":"2026-03-10T13:37:55.768443+0000","last_became_active":"2026-03-10T13:37:52.274618+0000","last_became_peered":"2026-03-10T13:37:52.274618+0000","last_unstale":"2026-03-10T13:37:55.768443+0000","last_undegraded":"2026-03-10T13:37:55.768443+0000","last_fullsized":"2026-03-10T13:37:55.768443+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:22:32.870258+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594322+0000","last_change":"2026-03-10T13:37:54.384659+0000","last_active":"2026-03-10T13:37:55.594322+0000","last_peered":"2026-03-10T13:37:55.594322+0000","last_clean":"2026-03-10T13:37:55.594322+0000","last_became_active":"2026-03-10T13:37:54.384493+0000","last_became_peered":"2026-03-10T13:37:54.384493+0000","last_unstale":"2026-03-10T13:37:55.594322+0000","last_undegraded":"2026-03-10T13:37:55.594322+0000","last_fullsized":"2026-03-10T13:37:55.594322+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:21:40.276801+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.2","version":"58'10","reported_seq":38,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.257726+0000","last_change":"2026-03-10T13:37:50.285154+0000","last_active":"2026-03-10T13:37:56.257726+0000","last_peered":"2026-03-10T13:37:56.257726+0000","last_clean":"2026-03-10T13:37:56.257726+0000","last_became_active":"2026-03-10T13:37:50.284994+0000","last_became_peered":"2026-03-10T13:37:50.284994+0000","last_unstale":"2026-03-10T13:37:56.257726+0000","last_undegraded":"2026-03-10T13:37:56.257726+0000","last_fullsized":"2026-03-10T13:37:56.257726+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:50:47.061816+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629399+0000","last_change":"2026-03-10T13:37:48.247954+0000","last_active":"2026-03-10T13:37:55.629399+0000","last_peered":"2026-03-10T13:37:55.629399+0000","last_clean":"2026-03-10T13:37:55.629399+0000","last_became_active":"2026-03-10T13:37:48.247663+0000","last_became_peered":"2026-03-10T13:37:48.247663+0000","last_unstale":"2026-03-10T13:37:55.629399+0000","last_undegraded":"2026-03-10T13:37:55.629399+0000","last_fullsized":"2026-03-10T13:37:55.629399+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:47:43.474018+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.3","version":"58'8","reported_seq":30,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.781493+0000","last_change":"2026-03-10T13:37:52.287296+0000","last_active":"2026-03-10T13:37:55.781493+0000","last_peered":"2026-03-10T13:37:55.781493+0000","last_clean":"2026-03-10T13:37:55.781493+0000","last_became_active":"2026-03-10T13:37:52.287120+0000","last_became_peered":"2026-03-10T13:37:52.287120+0000","last_unstale":"2026-03-10T13:37:55.781493+0000","last_undegraded":"2026-03-10T13:37:55.781493+0000","last_fullsized":"2026-03-10T13:37:55.781493+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:45:38.579724+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605887+0000","last_change":"2026-03-10T13:37:54.377432+0000","last_active":"2026-03-10T13:37:55.605887+0000","last_peered":"2026-03-10T13:37:55.605887+0000","last_clean":"2026-03-10T13:37:55.605887+0000","last_became_active":"2026-03-10T13:37:54.377248+0000","last_became_peered":"2026-03-10T13:37:54.377248+0000","last_unstale":"2026-03-10T13:37:55.605887+0000","last_undegraded":"2026-03-10T13:37:55.605887+0000","last_fullsized":"2026-03-10T13:37:55.605887+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:09:25.942919+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.3","version":"58'19","reported_seq":59,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.227456+0000","last_change":"2026-03-10T13:37:50.274855+0000","last_active":"2026-03-10T13:37:56.227456+0000","last_peered":"2026-03-10T13:37:56.227456+0000","last_clean":"2026-03-10T13:37:56.227456+0000","last_became_active":"2026-03-10T13:37:50.274520+0000","last_became_peered":"2026-03-10T13:37:50.274520+0000","last_unstale":"2026-03-10T13:37:56.227456+0000","last_undegraded":"2026-03-10T13:37:56.227456+0000","last_fullsized":"2026-03-10T13:37:56.227456+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:52:41.043795+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,7],"acting":[0,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.4","version":"50'1","reported_seq":33,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.620648+0000","last_change":"2026-03-10T13:37:48.244071+0000","last_active":"2026-03-10T13:37:55.620648+0000","last_peered":"2026-03-10T13:37:55.620648+0000","last_clean":"2026-03-10T13:37:55.620648+0000","last_became_active":"2026-03-10T13:37:48.243884+0000","last_became_peered":"2026-03-10T13:37:48.243884+0000","last_unstale":"2026-03-10T13:37:55.620648+0000","last_undegraded":"2026-03-10T13:37:55.620648+0000","last_fullsized":"2026-03-10T13:37:55.620648+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:30:00.549951+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.637294+0000","last_change":"2026-03-10T13:37:52.285298+0000","last_active":"2026-03-10T13:37:55.637294+0000","last_peered":"2026-03-10T13:37:55.637294+0000","last_clean":"2026-03-10T13:37:55.637294+0000","last_became_active":"2026-03-10T13:37:52.285188+0000","last_became_peered":"2026-03-10T13:37:52.285188+0000","last_unstale":"2026-03-10T13:37:55.637294+0000","last_undegraded":"2026-03-10T13:37:55.637294+0000","last_fullsized":"2026-03-10T13:37:55.637294+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:01:28.645328+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.617243+0000","last_change":"2026-03-10T13:37:54.383297+0000","last_active":"2026-03-10T13:37:55.617243+0000","last_peered":"2026-03-10T13:37:55.617243+0000","last_clean":"2026-03-10T13:37:55.617243+0000","last_became_active":"2026-03-10T13:37:54.383129+0000","last_became_peered":"2026-03-10T13:37:54.383129+0000","last_unstale":"2026-03-10T13:37:55.617243+0000","last_undegraded":"2026-03-10T13:37:55.617243+0000","last_fullsized":"2026-03-10T13:37:55.617243+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:13:30.685000+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.4","version":"58'28","reported_seq":74,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.267092+0000","last_change":"2026-03-10T13:37:50.285805+0000","last_active":"2026-03-10T13:37:56.267092+0000","last_peered":"2026-03-10T13:37:56.267092+0000","last_clean":"2026-03-10T13:37:56.267092+0000","last_became_active":"2026-03-10T13:37:50.285723+0000","last_became_peered":"2026-03-10T13:37:50.285723+0000","last_unstale":"2026-03-10T13:37:56.267092+0000","last_undegraded":"2026-03-10T13:37:56.267092+0000","last_fullsized":"2026-03-10T13:37:56.267092+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":28,"log_dups_size":0,"ondisk_log_size":28,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:19:04.818425+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":48,"num_read_kb":33,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,3],"acting":[1,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.3","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596599+0000","last_change":"2026-03-10T13:37:48.260328+0000","last_active":"2026-03-10T13:37:55.596599+0000","last_peered":"2026-03-10T13:37:55.596599+0000","last_clean":"2026-03-10T13:37:55.596599+0000","last_became_active":"2026-03-10T13:37:48.260187+0000","last_became_peered":"2026-03-10T13:37:48.260187+0000","last_unstale":"2026-03-10T13:37:55.596599+0000","last_undegraded":"2026-03-10T13:37:55.596599+0000","last_fullsized":"2026-03-10T13:37:55.596599+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:02:20.935175+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"52'2","reported_seq":34,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629469+0000","last_change":"2026-03-10T13:37:50.258883+0000","last_active":"2026-03-10T13:37:55.629469+0000","last_peered":"2026-03-10T13:37:55.629469+0000","last_clean":"2026-03-10T13:37:55.629469+0000","last_became_active":"2026-03-10T13:37:48.244175+0000","last_became_peered":"2026-03-10T13:37:48.244175+0000","last_unstale":"2026-03-10T13:37:55.629469+0000","last_undegraded":"2026-03-10T13:37:55.629469+0000","last_fullsized":"2026-03-10T13:37:55.629469+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:11:44.460102+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00086743300000000003,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605684+0000","last_change":"2026-03-10T13:37:52.337814+0000","last_active":"2026-03-10T13:37:55.605684+0000","last_peered":"2026-03-10T13:37:55.605684+0000","last_clean":"2026-03-10T13:37:55.605684+0000","last_became_active":"2026-03-10T13:37:52.337719+0000","last_became_peered":"2026-03-10T13:37:52.337719+0000","last_unstale":"2026-03-10T13:37:55.605684+0000","last_undegraded":"2026-03-10T13:37:55.605684+0000","last_fullsized":"2026-03-10T13:37:55.605684+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:31:18.102603+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602605+0000","last_change":"2026-03-10T13:37:54.361344+0000","last_active":"2026-03-10T13:37:55.602605+0000","last_peered":"2026-03-10T13:37:55.602605+0000","last_clean":"2026-03-10T13:37:55.602605+0000","last_became_active":"2026-03-10T13:37:54.361141+0000","last_became_peered":"2026-03-10T13:37:54.361141+0000","last_unstale":"2026-03-10T13:37:55.602605+0000","last_undegraded":"2026-03-10T13:37:55.602605+0000","last_fullsized":"2026-03-10T13:37:55.602605+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:24:48.534009+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.7","version":"58'13","reported_seq":50,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.163793+0000","last_change":"2026-03-10T13:37:50.283610+0000","last_active":"2026-03-10T13:37:56.163793+0000","last_peered":"2026-03-10T13:37:56.163793+0000","last_clean":"2026-03-10T13:37:56.163793+0000","last_became_active":"2026-03-10T13:37:50.279315+0000","last_became_peered":"2026-03-10T13:37:50.279315+0000","last_unstale":"2026-03-10T13:37:56.163793+0000","last_undegraded":"2026-03-10T13:37:56.163793+0000","last_fullsized":"2026-03-10T13:37:56.163793+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:12:19.738249+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.0","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.617269+0000","last_change":"2026-03-10T13:37:48.243979+0000","last_active":"2026-03-10T13:37:55.617269+0000","last_peered":"2026-03-10T13:37:55.617269+0000","last_clean":"2026-03-10T13:37:55.617269+0000","last_became_active":"2026-03-10T13:37:48.243687+0000","last_became_peered":"2026-03-10T13:37:48.243687+0000","last_unstale":"2026-03-10T13:37:55.617269+0000","last_undegraded":"2026-03-10T13:37:55.617269+0000","last_fullsized":"2026-03-10T13:37:55.617269+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:36:56.568375+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"50'1","reported_seq":33,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.674034+0000","last_change":"2026-03-10T13:37:50.255940+0000","last_active":"2026-03-10T13:37:55.674034+0000","last_peered":"2026-03-10T13:37:55.674034+0000","last_clean":"2026-03-10T13:37:55.674034+0000","last_became_active":"2026-03-10T13:37:48.248249+0000","last_became_peered":"2026-03-10T13:37:48.248249+0000","last_unstale":"2026-03-10T13:37:55.674034+0000","last_undegraded":"2026-03-10T13:37:55.674034+0000","last_fullsized":"2026-03-10T13:37:55.674034+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:55:01.231709+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00053989999999999995,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.674095+0000","last_change":"2026-03-10T13:37:52.286245+0000","last_active":"2026-03-10T13:37:55.674095+0000","last_peered":"2026-03-10T13:37:55.674095+0000","last_clean":"2026-03-10T13:37:55.674095+0000","last_became_active":"2026-03-10T13:37:52.285902+0000","last_became_peered":"2026-03-10T13:37:52.285902+0000","last_unstale":"2026-03-10T13:37:55.674095+0000","last_undegraded":"2026-03-10T13:37:55.674095+0000","last_fullsized":"2026-03-10T13:37:55.674095+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:57:13.600272+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594409+0000","last_change":"2026-03-10T13:37:54.384692+0000","last_active":"2026-03-10T13:37:55.594409+0000","last_peered":"2026-03-10T13:37:55.594409+0000","last_clean":"2026-03-10T13:37:55.594409+0000","last_became_active":"2026-03-10T13:37:54.384612+0000","last_became_peered":"2026-03-10T13:37:54.384612+0000","last_unstale":"2026-03-10T13:37:55.594409+0000","last_undegraded":"2026-03-10T13:37:55.594409+0000","last_fullsized":"2026-03-10T13:37:55.594409+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:36:48.828084+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.6","version":"58'12","reported_seq":41,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.178524+0000","last_change":"2026-03-10T13:37:50.278213+0000","last_active":"2026-03-10T13:37:56.178524+0000","last_peered":"2026-03-10T13:37:56.178524+0000","last_clean":"2026-03-10T13:37:56.178524+0000","last_became_active":"2026-03-10T13:37:50.278032+0000","last_became_peered":"2026-03-10T13:37:50.278032+0000","last_unstale":"2026-03-10T13:37:56.178524+0000","last_undegraded":"2026-03-10T13:37:56.178524+0000","last_fullsized":"2026-03-10T13:37:56.178524+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:44:08.519664+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,2],"acting":[0,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605384+0000","last_change":"2026-03-10T13:37:48.263540+0000","last_active":"2026-03-10T13:37:55.605384+0000","last_peered":"2026-03-10T13:37:55.605384+0000","last_clean":"2026-03-10T13:37:55.605384+0000","last_became_active":"2026-03-10T13:37:48.263445+0000","last_became_peered":"2026-03-10T13:37:48.263445+0000","last_unstale":"2026-03-10T13:37:55.605384+0000","last_undegraded":"2026-03-10T13:37:55.605384+0000","last_fullsized":"2026-03-10T13:37:55.605384+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:41:50.127741+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"58'5","reported_seq":42,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:57.259025+0000","last_change":"2026-03-10T13:37:50.259062+0000","last_active":"2026-03-10T13:37:57.259025+0000","last_peered":"2026-03-10T13:37:57.259025+0000","last_clean":"2026-03-10T13:37:57.259025+0000","last_became_active":"2026-03-10T13:37:48.254460+0000","last_became_peered":"2026-03-10T13:37:48.254460+0000","last_unstale":"2026-03-10T13:37:57.259025+0000","last_undegraded":"2026-03-10T13:37:57.259025+0000","last_fullsized":"2026-03-10T13:37:57.259025+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:10:36.108168+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00078135200000000002,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":2,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629172+0000","last_change":"2026-03-10T13:37:52.338874+0000","last_active":"2026-03-10T13:37:55.629172+0000","last_peered":"2026-03-10T13:37:55.629172+0000","last_clean":"2026-03-10T13:37:55.629172+0000","last_became_active":"2026-03-10T13:37:52.338525+0000","last_became_peered":"2026-03-10T13:37:52.338525+0000","last_unstale":"2026-03-10T13:37:55.629172+0000","last_undegraded":"2026-03-10T13:37:55.629172+0000","last_fullsized":"2026-03-10T13:37:55.629172+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:43:56.051905+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.617297+0000","last_change":"2026-03-10T13:37:54.362690+0000","last_active":"2026-03-10T13:37:55.617297+0000","last_peered":"2026-03-10T13:37:55.617297+0000","last_clean":"2026-03-10T13:37:55.617297+0000","last_became_active":"2026-03-10T13:37:54.362554+0000","last_became_peered":"2026-03-10T13:37:54.362554+0000","last_unstale":"2026-03-10T13:37:55.617297+0000","last_undegraded":"2026-03-10T13:37:55.617297+0000","last_fullsized":"2026-03-10T13:37:55.617297+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:09:32.394329+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.5","version":"58'16","reported_seq":48,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.271005+0000","last_change":"2026-03-10T13:37:50.268616+0000","last_active":"2026-03-10T13:37:56.271005+0000","last_peered":"2026-03-10T13:37:56.271005+0000","last_clean":"2026-03-10T13:37:56.271005+0000","last_became_active":"2026-03-10T13:37:50.268372+0000","last_became_peered":"2026-03-10T13:37:50.268372+0000","last_unstale":"2026-03-10T13:37:56.271005+0000","last_undegraded":"2026-03-10T13:37:56.271005+0000","last_fullsized":"2026-03-10T13:37:56.271005+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:15:44.097040+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.2","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.603078+0000","last_change":"2026-03-10T13:37:48.245907+0000","last_active":"2026-03-10T13:37:55.603078+0000","last_peered":"2026-03-10T13:37:55.603078+0000","last_clean":"2026-03-10T13:37:55.603078+0000","last_became_active":"2026-03-10T13:37:48.245814+0000","last_became_peered":"2026-03-10T13:37:48.245814+0000","last_unstale":"2026-03-10T13:37:55.603078+0000","last_undegraded":"2026-03-10T13:37:55.603078+0000","last_fullsized":"2026-03-10T13:37:55.603078+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:09:23.223356+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"1.0","version":"20'32","reported_seq":37,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594968+0000","last_change":"2026-03-10T13:37:46.464837+0000","last_active":"2026-03-10T13:37:55.594968+0000","last_peered":"2026-03-10T13:37:55.594968+0000","last_clean":"2026-03-10T13:37:55.594968+0000","last_became_active":"2026-03-10T13:37:46.156521+0000","last_became_peered":"2026-03-10T13:37:46.156521+0000","last_unstale":"2026-03-10T13:37:55.594968+0000","last_undegraded":"2026-03-10T13:37:55.594968+0000","last_fullsized":"2026-03-10T13:37:55.594968+0000","mapping_epoch":47,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":48,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:36:50.964361+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:36:50.964361+0000","last_clean_scrub_stamp":"2026-03-10T13:36:50.964361+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:13:13.823806+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.595007+0000","last_change":"2026-03-10T13:37:52.289669+0000","last_active":"2026-03-10T13:37:55.595007+0000","last_peered":"2026-03-10T13:37:55.595007+0000","last_clean":"2026-03-10T13:37:55.595007+0000","last_became_active":"2026-03-10T13:37:52.289484+0000","last_became_peered":"2026-03-10T13:37:52.289484+0000","last_unstale":"2026-03-10T13:37:55.595007+0000","last_undegraded":"2026-03-10T13:37:55.595007+0000","last_fullsized":"2026-03-10T13:37:55.595007+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:28:32.834254+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.628972+0000","last_change":"2026-03-10T13:37:54.362550+0000","last_active":"2026-03-10T13:37:55.628972+0000","last_peered":"2026-03-10T13:37:55.628972+0000","last_clean":"2026-03-10T13:37:55.628972+0000","last_became_active":"2026-03-10T13:37:54.361963+0000","last_became_peered":"2026-03-10T13:37:54.361963+0000","last_unstale":"2026-03-10T13:37:55.628972+0000","last_undegraded":"2026-03-10T13:37:55.628972+0000","last_fullsized":"2026-03-10T13:37:55.628972+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:00:38.591531+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.e","version":"58'11","reported_seq":42,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.276478+0000","last_change":"2026-03-10T13:37:50.283275+0000","last_active":"2026-03-10T13:37:56.276478+0000","last_peered":"2026-03-10T13:37:56.276478+0000","last_clean":"2026-03-10T13:37:56.276478+0000","last_became_active":"2026-03-10T13:37:50.283053+0000","last_became_peered":"2026-03-10T13:37:50.283053+0000","last_unstale":"2026-03-10T13:37:56.276478+0000","last_undegraded":"2026-03-10T13:37:56.276478+0000","last_fullsized":"2026-03-10T13:37:56.276478+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:35:21.514199+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.9","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596433+0000","last_change":"2026-03-10T13:37:48.257892+0000","last_active":"2026-03-10T13:37:55.596433+0000","last_peered":"2026-03-10T13:37:55.596433+0000","last_clean":"2026-03-10T13:37:55.596433+0000","last_became_active":"2026-03-10T13:37:48.257821+0000","last_became_peered":"2026-03-10T13:37:48.257821+0000","last_unstale":"2026-03-10T13:37:55.596433+0000","last_undegraded":"2026-03-10T13:37:55.596433+0000","last_fullsized":"2026-03-10T13:37:55.596433+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:34:33.136489+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629688+0000","last_change":"2026-03-10T13:37:52.335981+0000","last_active":"2026-03-10T13:37:55.629688+0000","last_peered":"2026-03-10T13:37:55.629688+0000","last_clean":"2026-03-10T13:37:55.629688+0000","last_became_active":"2026-03-10T13:37:52.335837+0000","last_became_peered":"2026-03-10T13:37:52.335837+0000","last_unstale":"2026-03-10T13:37:55.629688+0000","last_undegraded":"2026-03-10T13:37:55.629688+0000","last_fullsized":"2026-03-10T13:37:55.629688+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:40:03.789147+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602498+0000","last_change":"2026-03-10T13:37:54.385071+0000","last_active":"2026-03-10T13:37:55.602498+0000","last_peered":"2026-03-10T13:37:55.602498+0000","last_clean":"2026-03-10T13:37:55.602498+0000","last_became_active":"2026-03-10T13:37:54.384190+0000","last_became_peered":"2026-03-10T13:37:54.384190+0000","last_unstale":"2026-03-10T13:37:55.602498+0000","last_undegraded":"2026-03-10T13:37:55.602498+0000","last_fullsized":"2026-03-10T13:37:55.602498+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:44:44.984953+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.d","version":"58'17","reported_seq":51,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.273132+0000","last_change":"2026-03-10T13:37:50.295146+0000","last_active":"2026-03-10T13:37:56.273132+0000","last_peered":"2026-03-10T13:37:56.273132+0000","last_clean":"2026-03-10T13:37:56.273132+0000","last_became_active":"2026-03-10T13:37:50.294144+0000","last_became_peered":"2026-03-10T13:37:50.294144+0000","last_unstale":"2026-03-10T13:37:56.273132+0000","last_undegraded":"2026-03-10T13:37:56.273132+0000","last_fullsized":"2026-03-10T13:37:56.273132+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:38:13.212439+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,1],"acting":[4,2,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.a","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.636805+0000","last_change":"2026-03-10T13:37:48.262800+0000","last_active":"2026-03-10T13:37:55.636805+0000","last_peered":"2026-03-10T13:37:55.636805+0000","last_clean":"2026-03-10T13:37:55.636805+0000","last_became_active":"2026-03-10T13:37:48.262566+0000","last_became_peered":"2026-03-10T13:37:48.262566+0000","last_unstale":"2026-03-10T13:37:55.636805+0000","last_undegraded":"2026-03-10T13:37:55.636805+0000","last_fullsized":"2026-03-10T13:37:55.636805+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:06:07.061463+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.617733+0000","last_change":"2026-03-10T13:37:52.288990+0000","last_active":"2026-03-10T13:37:55.617733+0000","last_peered":"2026-03-10T13:37:55.617733+0000","last_clean":"2026-03-10T13:37:55.617733+0000","last_became_active":"2026-03-10T13:37:52.288809+0000","last_became_peered":"2026-03-10T13:37:52.288809+0000","last_unstale":"2026-03-10T13:37:55.617733+0000","last_undegraded":"2026-03-10T13:37:55.617733+0000","last_fullsized":"2026-03-10T13:37:55.617733+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:02:01.131903+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.674131+0000","last_change":"2026-03-10T13:37:54.368034+0000","last_active":"2026-03-10T13:37:55.674131+0000","last_peered":"2026-03-10T13:37:55.674131+0000","last_clean":"2026-03-10T13:37:55.674131+0000","last_became_active":"2026-03-10T13:37:54.367917+0000","last_became_peered":"2026-03-10T13:37:54.367917+0000","last_unstale":"2026-03-10T13:37:55.674131+0000","last_undegraded":"2026-03-10T13:37:55.674131+0000","last_fullsized":"2026-03-10T13:37:55.674131+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:47:19.763693+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.c","version":"58'10","reported_seq":38,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.193827+0000","last_change":"2026-03-10T13:37:50.276348+0000","last_active":"2026-03-10T13:37:56.193827+0000","last_peered":"2026-03-10T13:37:56.193827+0000","last_clean":"2026-03-10T13:37:56.193827+0000","last_became_active":"2026-03-10T13:37:50.273423+0000","last_became_peered":"2026-03-10T13:37:50.273423+0000","last_unstale":"2026-03-10T13:37:56.193827+0000","last_undegraded":"2026-03-10T13:37:56.193827+0000","last_fullsized":"2026-03-10T13:37:56.193827+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:44:42.040423+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,6],"acting":[4,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.b","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602968+0000","last_change":"2026-03-10T13:37:48.259720+0000","last_active":"2026-03-10T13:37:55.602968+0000","last_peered":"2026-03-10T13:37:55.602968+0000","last_clean":"2026-03-10T13:37:55.602968+0000","last_became_active":"2026-03-10T13:37:48.259600+0000","last_became_peered":"2026-03-10T13:37:48.259600+0000","last_unstale":"2026-03-10T13:37:55.602968+0000","last_undegraded":"2026-03-10T13:37:55.602968+0000","last_fullsized":"2026-03-10T13:37:55.602968+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:44:39.981466+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.d","version":"58'8","reported_seq":30,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.780243+0000","last_change":"2026-03-10T13:37:52.286033+0000","last_active":"2026-03-10T13:37:55.780243+0000","last_peered":"2026-03-10T13:37:55.780243+0000","last_clean":"2026-03-10T13:37:55.780243+0000","last_became_active":"2026-03-10T13:37:52.285743+0000","last_became_peered":"2026-03-10T13:37:52.285743+0000","last_unstale":"2026-03-10T13:37:55.780243+0000","last_undegraded":"2026-03-10T13:37:55.780243+0000","last_fullsized":"2026-03-10T13:37:55.780243+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:21:56.418901+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596041+0000","last_change":"2026-03-10T13:37:54.348661+0000","last_active":"2026-03-10T13:37:55.596041+0000","last_peered":"2026-03-10T13:37:55.596041+0000","last_clean":"2026-03-10T13:37:55.596041+0000","last_became_active":"2026-03-10T13:37:54.348583+0000","last_became_peered":"2026-03-10T13:37:54.348583+0000","last_unstale":"2026-03-10T13:37:55.596041+0000","last_undegraded":"2026-03-10T13:37:55.596041+0000","last_fullsized":"2026-03-10T13:37:55.596041+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:18:05.306147+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.b","version":"58'9","reported_seq":39,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.222035+0000","last_change":"2026-03-10T13:37:50.278281+0000","last_active":"2026-03-10T13:37:56.222035+0000","last_peered":"2026-03-10T13:37:56.222035+0000","last_clean":"2026-03-10T13:37:56.222035+0000","last_became_active":"2026-03-10T13:37:50.278147+0000","last_became_peered":"2026-03-10T13:37:50.278147+0000","last_unstale":"2026-03-10T13:37:56.222035+0000","last_undegraded":"2026-03-10T13:37:56.222035+0000","last_fullsized":"2026-03-10T13:37:56.222035+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:39:52.781786+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.c","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629307+0000","last_change":"2026-03-10T13:37:48.248020+0000","last_active":"2026-03-10T13:37:55.629307+0000","last_peered":"2026-03-10T13:37:55.629307+0000","last_clean":"2026-03-10T13:37:55.629307+0000","last_became_active":"2026-03-10T13:37:48.247807+0000","last_became_peered":"2026-03-10T13:37:48.247807+0000","last_unstale":"2026-03-10T13:37:55.629307+0000","last_undegraded":"2026-03-10T13:37:55.629307+0000","last_fullsized":"2026-03-10T13:37:55.629307+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:06:33.699352+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.674311+0000","last_change":"2026-03-10T13:37:52.274397+0000","last_active":"2026-03-10T13:37:55.674311+0000","last_peered":"2026-03-10T13:37:55.674311+0000","last_clean":"2026-03-10T13:37:55.674311+0000","last_became_active":"2026-03-10T13:37:52.274279+0000","last_became_peered":"2026-03-10T13:37:52.274279+0000","last_unstale":"2026-03-10T13:37:55.674311+0000","last_undegraded":"2026-03-10T13:37:55.674311+0000","last_fullsized":"2026-03-10T13:37:55.674311+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:08:13.228403+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605780+0000","last_change":"2026-03-10T13:37:54.369365+0000","last_active":"2026-03-10T13:37:55.605780+0000","last_peered":"2026-03-10T13:37:55.605780+0000","last_clean":"2026-03-10T13:37:55.605780+0000","last_became_active":"2026-03-10T13:37:54.369229+0000","last_became_peered":"2026-03-10T13:37:54.369229+0000","last_unstale":"2026-03-10T13:37:55.605780+0000","last_undegraded":"2026-03-10T13:37:55.605780+0000","last_fullsized":"2026-03-10T13:37:55.605780+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:59:39.836675+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.a","version":"58'19","reported_seq":54,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.306229+0000","last_change":"2026-03-10T13:37:50.280045+0000","last_active":"2026-03-10T13:37:56.306229+0000","last_peered":"2026-03-10T13:37:56.306229+0000","last_clean":"2026-03-10T13:37:56.306229+0000","last_became_active":"2026-03-10T13:37:50.279899+0000","last_became_peered":"2026-03-10T13:37:50.279899+0000","last_unstale":"2026-03-10T13:37:56.306229+0000","last_undegraded":"2026-03-10T13:37:56.306229+0000","last_fullsized":"2026-03-10T13:37:56.306229+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:30:39.883196+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,1,7],"acting":[6,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.d","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594929+0000","last_change":"2026-03-10T13:37:48.243521+0000","last_active":"2026-03-10T13:37:55.594929+0000","last_peered":"2026-03-10T13:37:55.594929+0000","last_clean":"2026-03-10T13:37:55.594929+0000","last_became_active":"2026-03-10T13:37:48.243429+0000","last_became_peered":"2026-03-10T13:37:48.243429+0000","last_unstale":"2026-03-10T13:37:55.594929+0000","last_undegraded":"2026-03-10T13:37:55.594929+0000","last_fullsized":"2026-03-10T13:37:55.594929+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:31:48.610497+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.674341+0000","last_change":"2026-03-10T13:37:52.286368+0000","last_active":"2026-03-10T13:37:55.674341+0000","last_peered":"2026-03-10T13:37:55.674341+0000","last_clean":"2026-03-10T13:37:55.674341+0000","last_became_active":"2026-03-10T13:37:52.286142+0000","last_became_peered":"2026-03-10T13:37:52.286142+0000","last_unstale":"2026-03-10T13:37:55.674341+0000","last_undegraded":"2026-03-10T13:37:55.674341+0000","last_fullsized":"2026-03-10T13:37:55.674341+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:41:41.893089+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594907+0000","last_change":"2026-03-10T13:37:54.369328+0000","last_active":"2026-03-10T13:37:55.594907+0000","last_peered":"2026-03-10T13:37:55.594907+0000","last_clean":"2026-03-10T13:37:55.594907+0000","last_became_active":"2026-03-10T13:37:54.369158+0000","last_became_peered":"2026-03-10T13:37:54.369158+0000","last_unstale":"2026-03-10T13:37:55.594907+0000","last_undegraded":"2026-03-10T13:37:55.594907+0000","last_fullsized":"2026-03-10T13:37:55.594907+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:02:02.482736+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.9","version":"58'12","reported_seq":46,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.990102+0000","last_change":"2026-03-10T13:37:50.283201+0000","last_active":"2026-03-10T13:37:55.990102+0000","last_peered":"2026-03-10T13:37:55.990102+0000","last_clean":"2026-03-10T13:37:55.990102+0000","last_became_active":"2026-03-10T13:37:50.282949+0000","last_became_peered":"2026-03-10T13:37:50.282949+0000","last_unstale":"2026-03-10T13:37:55.990102+0000","last_undegraded":"2026-03-10T13:37:55.990102+0000","last_fullsized":"2026-03-10T13:37:55.990102+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:06:55.563842+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,3],"acting":[4,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.e","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594816+0000","last_change":"2026-03-10T13:37:48.257038+0000","last_active":"2026-03-10T13:37:55.594816+0000","last_peered":"2026-03-10T13:37:55.594816+0000","last_clean":"2026-03-10T13:37:55.594816+0000","last_became_active":"2026-03-10T13:37:48.256790+0000","last_became_peered":"2026-03-10T13:37:48.256790+0000","last_unstale":"2026-03-10T13:37:55.594816+0000","last_undegraded":"2026-03-10T13:37:55.594816+0000","last_fullsized":"2026-03-10T13:37:55.594816+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:23:30.297413+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.8","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.674285+0000","last_change":"2026-03-10T13:37:52.284884+0000","last_active":"2026-03-10T13:37:55.674285+0000","last_peered":"2026-03-10T13:37:55.674285+0000","last_clean":"2026-03-10T13:37:55.674285+0000","last_became_active":"2026-03-10T13:37:52.284804+0000","last_became_peered":"2026-03-10T13:37:52.284804+0000","last_unstale":"2026-03-10T13:37:55.674285+0000","last_undegraded":"2026-03-10T13:37:55.674285+0000","last_fullsized":"2026-03-10T13:37:55.674285+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:37:38.943911+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602702+0000","last_change":"2026-03-10T13:37:54.361277+0000","last_active":"2026-03-10T13:37:55.602702+0000","last_peered":"2026-03-10T13:37:55.602702+0000","last_clean":"2026-03-10T13:37:55.602702+0000","last_became_active":"2026-03-10T13:37:54.361029+0000","last_became_peered":"2026-03-10T13:37:54.361029+0000","last_unstale":"2026-03-10T13:37:55.602702+0000","last_undegraded":"2026-03-10T13:37:55.602702+0000","last_fullsized":"2026-03-10T13:37:55.602702+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:28:30.659041+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.8","version":"58'15","reported_seq":48,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.191145+0000","last_change":"2026-03-10T13:37:50.264100+0000","last_active":"2026-03-10T13:37:56.191145+0000","last_peered":"2026-03-10T13:37:56.191145+0000","last_clean":"2026-03-10T13:37:56.191145+0000","last_became_active":"2026-03-10T13:37:50.263960+0000","last_became_peered":"2026-03-10T13:37:50.263960+0000","last_unstale":"2026-03-10T13:37:56.191145+0000","last_undegraded":"2026-03-10T13:37:56.191145+0000","last_fullsized":"2026-03-10T13:37:56.191145+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:46:00.042637+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,6],"acting":[5,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.f","version":"50'2","reported_seq":39,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.651640+0000","last_change":"2026-03-10T13:37:48.257298+0000","last_active":"2026-03-10T13:37:55.651640+0000","last_peered":"2026-03-10T13:37:55.651640+0000","last_clean":"2026-03-10T13:37:55.651640+0000","last_became_active":"2026-03-10T13:37:48.256876+0000","last_became_peered":"2026-03-10T13:37:48.256876+0000","last_unstale":"2026-03-10T13:37:55.651640+0000","last_undegraded":"2026-03-10T13:37:55.651640+0000","last_fullsized":"2026-03-10T13:37:55.651640+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:55:18.391154+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.9","version":"58'8","reported_seq":30,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.797846+0000","last_change":"2026-03-10T13:37:52.277458+0000","last_active":"2026-03-10T13:37:55.797846+0000","last_peered":"2026-03-10T13:37:55.797846+0000","last_clean":"2026-03-10T13:37:55.797846+0000","last_became_active":"2026-03-10T13:37:52.277380+0000","last_became_peered":"2026-03-10T13:37:52.277380+0000","last_unstale":"2026-03-10T13:37:55.797846+0000","last_undegraded":"2026-03-10T13:37:55.797846+0000","last_fullsized":"2026-03-10T13:37:55.797846+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:50:27.191094+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.628738+0000","last_change":"2026-03-10T13:37:54.386443+0000","last_active":"2026-03-10T13:37:55.628738+0000","last_peered":"2026-03-10T13:37:55.628738+0000","last_clean":"2026-03-10T13:37:55.628738+0000","last_became_active":"2026-03-10T13:37:54.386047+0000","last_became_peered":"2026-03-10T13:37:54.386047+0000","last_unstale":"2026-03-10T13:37:55.628738+0000","last_undegraded":"2026-03-10T13:37:55.628738+0000","last_fullsized":"2026-03-10T13:37:55.628738+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:18:33.277833+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.10","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.636798+0000","last_change":"2026-03-10T13:37:48.251649+0000","last_active":"2026-03-10T13:37:55.636798+0000","last_peered":"2026-03-10T13:37:55.636798+0000","last_clean":"2026-03-10T13:37:55.636798+0000","last_became_active":"2026-03-10T13:37:48.251350+0000","last_became_peered":"2026-03-10T13:37:48.251350+0000","last_unstale":"2026-03-10T13:37:55.636798+0000","last_undegraded":"2026-03-10T13:37:55.636798+0000","last_fullsized":"2026-03-10T13:37:55.636798+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:49:22.184263+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.17","version":"58'6","reported_seq":32,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.208395+0000","last_change":"2026-03-10T13:37:50.284861+0000","last_active":"2026-03-10T13:37:56.208395+0000","last_peered":"2026-03-10T13:37:56.208395+0000","last_clean":"2026-03-10T13:37:56.208395+0000","last_became_active":"2026-03-10T13:37:50.284618+0000","last_became_peered":"2026-03-10T13:37:50.284618+0000","last_unstale":"2026-03-10T13:37:56.208395+0000","last_undegraded":"2026-03-10T13:37:56.208395+0000","last_fullsized":"2026-03-10T13:37:56.208395+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:08:04.316010+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629350+0000","last_change":"2026-03-10T13:37:52.339052+0000","last_active":"2026-03-10T13:37:55.629350+0000","last_peered":"2026-03-10T13:37:55.629350+0000","last_clean":"2026-03-10T13:37:55.629350+0000","last_became_active":"2026-03-10T13:37:52.338634+0000","last_became_peered":"2026-03-10T13:37:52.338634+0000","last_unstale":"2026-03-10T13:37:55.629350+0000","last_undegraded":"2026-03-10T13:37:55.629350+0000","last_fullsized":"2026-03-10T13:37:55.629350+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:57:28.989203+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.595190+0000","last_change":"2026-03-10T13:37:54.384464+0000","last_active":"2026-03-10T13:37:55.595190+0000","last_peered":"2026-03-10T13:37:55.595190+0000","last_clean":"2026-03-10T13:37:55.595190+0000","last_became_active":"2026-03-10T13:37:54.384376+0000","last_became_peered":"2026-03-10T13:37:54.384376+0000","last_unstale":"2026-03-10T13:37:55.595190+0000","last_undegraded":"2026-03-10T13:37:55.595190+0000","last_fullsized":"2026-03-10T13:37:55.595190+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:51:50.853700+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.16","version":"58'9","reported_seq":39,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.016298+0000","last_change":"2026-03-10T13:37:50.274920+0000","last_active":"2026-03-10T13:37:56.016298+0000","last_peered":"2026-03-10T13:37:56.016298+0000","last_clean":"2026-03-10T13:37:56.016298+0000","last_became_active":"2026-03-10T13:37:50.274656+0000","last_became_peered":"2026-03-10T13:37:50.274656+0000","last_unstale":"2026-03-10T13:37:56.016298+0000","last_undegraded":"2026-03-10T13:37:56.016298+0000","last_fullsized":"2026-03-10T13:37:56.016298+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:58:41.517370+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,7],"acting":[0,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.11","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594683+0000","last_change":"2026-03-10T13:37:48.265357+0000","last_active":"2026-03-10T13:37:55.594683+0000","last_peered":"2026-03-10T13:37:55.594683+0000","last_clean":"2026-03-10T13:37:55.594683+0000","last_became_active":"2026-03-10T13:37:48.264483+0000","last_became_peered":"2026-03-10T13:37:48.264483+0000","last_unstale":"2026-03-10T13:37:55.594683+0000","last_undegraded":"2026-03-10T13:37:55.594683+0000","last_fullsized":"2026-03-10T13:37:55.594683+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:36:39.093212+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602745+0000","last_change":"2026-03-10T13:37:52.270195+0000","last_active":"2026-03-10T13:37:55.602745+0000","last_peered":"2026-03-10T13:37:55.602745+0000","last_clean":"2026-03-10T13:37:55.602745+0000","last_became_active":"2026-03-10T13:37:52.270078+0000","last_became_peered":"2026-03-10T13:37:52.270078+0000","last_unstale":"2026-03-10T13:37:55.602745+0000","last_undegraded":"2026-03-10T13:37:55.602745+0000","last_fullsized":"2026-03-10T13:37:55.602745+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:37:41.644438+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.673753+0000","last_change":"2026-03-10T13:37:54.357080+0000","last_active":"2026-03-10T13:37:55.673753+0000","last_peered":"2026-03-10T13:37:55.673753+0000","last_clean":"2026-03-10T13:37:55.673753+0000","last_became_active":"2026-03-10T13:37:54.356988+0000","last_became_peered":"2026-03-10T13:37:54.356988+0000","last_unstale":"2026-03-10T13:37:55.673753+0000","last_undegraded":"2026-03-10T13:37:55.673753+0000","last_fullsized":"2026-03-10T13:37:55.673753+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:49:57.476414+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.15","version":"58'9","reported_seq":39,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.246001+0000","last_change":"2026-03-10T13:37:50.274156+0000","last_active":"2026-03-10T13:37:56.246001+0000","last_peered":"2026-03-10T13:37:56.246001+0000","last_clean":"2026-03-10T13:37:56.246001+0000","last_became_active":"2026-03-10T13:37:50.274059+0000","last_became_peered":"2026-03-10T13:37:50.274059+0000","last_unstale":"2026-03-10T13:37:56.246001+0000","last_undegraded":"2026-03-10T13:37:56.246001+0000","last_fullsized":"2026-03-10T13:37:56.246001+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:34:26.693385+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,3],"acting":[5,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.12","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605314+0000","last_change":"2026-03-10T13:37:48.258219+0000","last_active":"2026-03-10T13:37:55.605314+0000","last_peered":"2026-03-10T13:37:55.605314+0000","last_clean":"2026-03-10T13:37:55.605314+0000","last_became_active":"2026-03-10T13:37:48.258120+0000","last_became_peered":"2026-03-10T13:37:48.258120+0000","last_unstale":"2026-03-10T13:37:55.605314+0000","last_undegraded":"2026-03-10T13:37:55.605314+0000","last_fullsized":"2026-03-10T13:37:55.605314+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:09:26.202966+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"58'8","reported_seq":30,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.792802+0000","last_change":"2026-03-10T13:37:52.266948+0000","last_active":"2026-03-10T13:37:55.792802+0000","last_peered":"2026-03-10T13:37:55.792802+0000","last_clean":"2026-03-10T13:37:55.792802+0000","last_became_active":"2026-03-10T13:37:52.266858+0000","last_became_peered":"2026-03-10T13:37:52.266858+0000","last_unstale":"2026-03-10T13:37:55.792802+0000","last_undegraded":"2026-03-10T13:37:55.792802+0000","last_fullsized":"2026-03-10T13:37:55.792802+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:05:08.260539+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"58'1","reported_seq":16,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.595835+0000","last_change":"2026-03-10T13:37:54.368333+0000","last_active":"2026-03-10T13:37:55.595835+0000","last_peered":"2026-03-10T13:37:55.595835+0000","last_clean":"2026-03-10T13:37:55.595835+0000","last_became_active":"2026-03-10T13:37:54.368106+0000","last_became_peered":"2026-03-10T13:37:54.368106+0000","last_unstale":"2026-03-10T13:37:55.595835+0000","last_undegraded":"2026-03-10T13:37:55.595835+0000","last_fullsized":"2026-03-10T13:37:55.595835+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:27:14.776182+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.14","version":"58'10","reported_seq":38,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.230485+0000","last_change":"2026-03-10T13:37:50.284797+0000","last_active":"2026-03-10T13:37:56.230485+0000","last_peered":"2026-03-10T13:37:56.230485+0000","last_clean":"2026-03-10T13:37:56.230485+0000","last_became_active":"2026-03-10T13:37:50.284487+0000","last_became_peered":"2026-03-10T13:37:50.284487+0000","last_unstale":"2026-03-10T13:37:56.230485+0000","last_undegraded":"2026-03-10T13:37:56.230485+0000","last_fullsized":"2026-03-10T13:37:56.230485+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:13:30.415464+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.13","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594619+0000","last_change":"2026-03-10T13:37:48.263778+0000","last_active":"2026-03-10T13:37:55.594619+0000","last_peered":"2026-03-10T13:37:55.594619+0000","last_clean":"2026-03-10T13:37:55.594619+0000","last_became_active":"2026-03-10T13:37:48.263579+0000","last_became_peered":"2026-03-10T13:37:48.263579+0000","last_unstale":"2026-03-10T13:37:55.594619+0000","last_undegraded":"2026-03-10T13:37:55.594619+0000","last_fullsized":"2026-03-10T13:37:55.594619+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:03:29.980141+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.15","version":"58'8","reported_seq":30,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.776431+0000","last_change":"2026-03-10T13:37:52.338760+0000","last_active":"2026-03-10T13:37:55.776431+0000","last_peered":"2026-03-10T13:37:55.776431+0000","last_clean":"2026-03-10T13:37:55.776431+0000","last_became_active":"2026-03-10T13:37:52.338399+0000","last_became_peered":"2026-03-10T13:37:52.338399+0000","last_unstale":"2026-03-10T13:37:55.776431+0000","last_undegraded":"2026-03-10T13:37:55.776431+0000","last_fullsized":"2026-03-10T13:37:55.776431+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:57:39.442630+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.604991+0000","last_change":"2026-03-10T13:37:54.377632+0000","last_active":"2026-03-10T13:37:55.604991+0000","last_peered":"2026-03-10T13:37:55.604991+0000","last_clean":"2026-03-10T13:37:55.604991+0000","last_became_active":"2026-03-10T13:37:54.377530+0000","last_became_peered":"2026-03-10T13:37:54.377530+0000","last_unstale":"2026-03-10T13:37:55.604991+0000","last_undegraded":"2026-03-10T13:37:55.604991+0000","last_fullsized":"2026-03-10T13:37:55.604991+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:18:34.057127+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.13","version":"58'11","reported_seq":42,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.219649+0000","last_change":"2026-03-10T13:37:50.294733+0000","last_active":"2026-03-10T13:37:56.219649+0000","last_peered":"2026-03-10T13:37:56.219649+0000","last_clean":"2026-03-10T13:37:56.219649+0000","last_became_active":"2026-03-10T13:37:50.293901+0000","last_became_peered":"2026-03-10T13:37:50.293901+0000","last_unstale":"2026-03-10T13:37:56.219649+0000","last_undegraded":"2026-03-10T13:37:56.219649+0000","last_fullsized":"2026-03-10T13:37:56.219649+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:32:16.590805+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.14","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596378+0000","last_change":"2026-03-10T13:37:48.257752+0000","last_active":"2026-03-10T13:37:55.596378+0000","last_peered":"2026-03-10T13:37:55.596378+0000","last_clean":"2026-03-10T13:37:55.596378+0000","last_became_active":"2026-03-10T13:37:48.256678+0000","last_became_peered":"2026-03-10T13:37:48.256678+0000","last_unstale":"2026-03-10T13:37:55.596378+0000","last_undegraded":"2026-03-10T13:37:55.596378+0000","last_fullsized":"2026-03-10T13:37:55.596378+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:13:57.296676+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.12","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.616862+0000","last_change":"2026-03-10T13:37:52.288481+0000","last_active":"2026-03-10T13:37:55.616862+0000","last_peered":"2026-03-10T13:37:55.616862+0000","last_clean":"2026-03-10T13:37:55.616862+0000","last_became_active":"2026-03-10T13:37:52.288322+0000","last_became_peered":"2026-03-10T13:37:52.288322+0000","last_unstale":"2026-03-10T13:37:55.616862+0000","last_undegraded":"2026-03-10T13:37:55.616862+0000","last_fullsized":"2026-03-10T13:37:55.616862+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:54:35.899719+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602534+0000","last_change":"2026-03-10T13:37:54.371650+0000","last_active":"2026-03-10T13:37:55.602534+0000","last_peered":"2026-03-10T13:37:55.602534+0000","last_clean":"2026-03-10T13:37:55.602534+0000","last_became_active":"2026-03-10T13:37:54.371553+0000","last_became_peered":"2026-03-10T13:37:54.371553+0000","last_unstale":"2026-03-10T13:37:55.602534+0000","last_undegraded":"2026-03-10T13:37:55.602534+0000","last_fullsized":"2026-03-10T13:37:55.602534+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:17:43.593519+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.12","version":"58'9","reported_seq":39,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.116785+0000","last_change":"2026-03-10T13:37:50.285101+0000","last_active":"2026-03-10T13:37:56.116785+0000","last_peered":"2026-03-10T13:37:56.116785+0000","last_clean":"2026-03-10T13:37:56.116785+0000","last_became_active":"2026-03-10T13:37:50.283745+0000","last_became_peered":"2026-03-10T13:37:50.283745+0000","last_unstale":"2026-03-10T13:37:56.116785+0000","last_undegraded":"2026-03-10T13:37:56.116785+0000","last_fullsized":"2026-03-10T13:37:56.116785+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:19:37.172925+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.15","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594584+0000","last_change":"2026-03-10T13:37:48.263844+0000","last_active":"2026-03-10T13:37:55.594584+0000","last_peered":"2026-03-10T13:37:55.594584+0000","last_clean":"2026-03-10T13:37:55.594584+0000","last_became_active":"2026-03-10T13:37:48.263702+0000","last_became_peered":"2026-03-10T13:37:48.263702+0000","last_unstale":"2026-03-10T13:37:55.594584+0000","last_undegraded":"2026-03-10T13:37:55.594584+0000","last_fullsized":"2026-03-10T13:37:55.594584+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:45:11.766444+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.603374+0000","last_change":"2026-03-10T13:37:52.288313+0000","last_active":"2026-03-10T13:37:55.603374+0000","last_peered":"2026-03-10T13:37:55.603374+0000","last_clean":"2026-03-10T13:37:55.603374+0000","last_became_active":"2026-03-10T13:37:52.287426+0000","last_became_peered":"2026-03-10T13:37:52.287426+0000","last_unstale":"2026-03-10T13:37:55.603374+0000","last_undegraded":"2026-03-10T13:37:55.603374+0000","last_fullsized":"2026-03-10T13:37:55.603374+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:37:27.552906+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605074+0000","last_change":"2026-03-10T13:37:54.368623+0000","last_active":"2026-03-10T13:37:55.605074+0000","last_peered":"2026-03-10T13:37:55.605074+0000","last_clean":"2026-03-10T13:37:55.605074+0000","last_became_active":"2026-03-10T13:37:54.368437+0000","last_became_peered":"2026-03-10T13:37:54.368437+0000","last_unstale":"2026-03-10T13:37:55.605074+0000","last_undegraded":"2026-03-10T13:37:55.605074+0000","last_fullsized":"2026-03-10T13:37:55.605074+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:14:31.149045+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.11","version":"58'11","reported_seq":42,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.285128+0000","last_change":"2026-03-10T13:37:50.276082+0000","last_active":"2026-03-10T13:37:56.285128+0000","last_peered":"2026-03-10T13:37:56.285128+0000","last_clean":"2026-03-10T13:37:56.285128+0000","last_became_active":"2026-03-10T13:37:50.275886+0000","last_became_peered":"2026-03-10T13:37:50.275886+0000","last_unstale":"2026-03-10T13:37:56.285128+0000","last_undegraded":"2026-03-10T13:37:56.285128+0000","last_fullsized":"2026-03-10T13:37:56.285128+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:42:30.504771+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.16","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629199+0000","last_change":"2026-03-10T13:37:48.244177+0000","last_active":"2026-03-10T13:37:55.629199+0000","last_peered":"2026-03-10T13:37:55.629199+0000","last_clean":"2026-03-10T13:37:55.629199+0000","last_became_active":"2026-03-10T13:37:48.244050+0000","last_became_peered":"2026-03-10T13:37:48.244050+0000","last_unstale":"2026-03-10T13:37:55.629199+0000","last_undegraded":"2026-03-10T13:37:55.629199+0000","last_fullsized":"2026-03-10T13:37:55.629199+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:38:21.326943+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594563+0000","last_change":"2026-03-10T13:37:52.277330+0000","last_active":"2026-03-10T13:37:55.594563+0000","last_peered":"2026-03-10T13:37:55.594563+0000","last_clean":"2026-03-10T13:37:55.594563+0000","last_became_active":"2026-03-10T13:37:52.277147+0000","last_became_peered":"2026-03-10T13:37:52.277147+0000","last_unstale":"2026-03-10T13:37:55.594563+0000","last_undegraded":"2026-03-10T13:37:55.594563+0000","last_fullsized":"2026-03-10T13:37:55.594563+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:32:54.063417+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.13","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.603560+0000","last_change":"2026-03-10T13:37:54.385210+0000","last_active":"2026-03-10T13:37:55.603560+0000","last_peered":"2026-03-10T13:37:55.603560+0000","last_clean":"2026-03-10T13:37:55.603560+0000","last_became_active":"2026-03-10T13:37:54.384453+0000","last_became_peered":"2026-03-10T13:37:54.384453+0000","last_unstale":"2026-03-10T13:37:55.603560+0000","last_undegraded":"2026-03-10T13:37:55.603560+0000","last_fullsized":"2026-03-10T13:37:55.603560+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:06:28.083704+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.10","version":"58'4","reported_seq":29,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.996704+0000","last_change":"2026-03-10T13:37:50.284946+0000","last_active":"2026-03-10T13:37:55.996704+0000","last_peered":"2026-03-10T13:37:55.996704+0000","last_clean":"2026-03-10T13:37:55.996704+0000","last_became_active":"2026-03-10T13:37:50.284723+0000","last_became_peered":"2026-03-10T13:37:50.284723+0000","last_unstale":"2026-03-10T13:37:55.996704+0000","last_undegraded":"2026-03-10T13:37:55.996704+0000","last_fullsized":"2026-03-10T13:37:55.996704+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:44:17.113471+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,6],"acting":[3,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605241+0000","last_change":"2026-03-10T13:37:48.259317+0000","last_active":"2026-03-10T13:37:55.605241+0000","last_peered":"2026-03-10T13:37:55.605241+0000","last_clean":"2026-03-10T13:37:55.605241+0000","last_became_active":"2026-03-10T13:37:48.259224+0000","last_became_peered":"2026-03-10T13:37:48.259224+0000","last_unstale":"2026-03-10T13:37:55.605241+0000","last_undegraded":"2026-03-10T13:37:55.605241+0000","last_fullsized":"2026-03-10T13:37:55.605241+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:22:05.122953+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.637109+0000","last_change":"2026-03-10T13:37:52.275662+0000","last_active":"2026-03-10T13:37:55.637109+0000","last_peered":"2026-03-10T13:37:55.637109+0000","last_clean":"2026-03-10T13:37:55.637109+0000","last_became_active":"2026-03-10T13:37:52.275540+0000","last_became_peered":"2026-03-10T13:37:52.275540+0000","last_unstale":"2026-03-10T13:37:55.637109+0000","last_undegraded":"2026-03-10T13:37:55.637109+0000","last_fullsized":"2026-03-10T13:37:55.637109+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:34:45.609077+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594477+0000","last_change":"2026-03-10T13:37:54.369869+0000","last_active":"2026-03-10T13:37:55.594477+0000","last_peered":"2026-03-10T13:37:55.594477+0000","last_clean":"2026-03-10T13:37:55.594477+0000","last_became_active":"2026-03-10T13:37:54.369779+0000","last_became_peered":"2026-03-10T13:37:54.369779+0000","last_unstale":"2026-03-10T13:37:55.594477+0000","last_undegraded":"2026-03-10T13:37:55.594477+0000","last_fullsized":"2026-03-10T13:37:55.594477+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:15:20.019452+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.617456+0000","last_change":"2026-03-10T13:37:54.351436+0000","last_active":"2026-03-10T13:37:55.617456+0000","last_peered":"2026-03-10T13:37:55.617456+0000","last_clean":"2026-03-10T13:37:55.617456+0000","last_became_active":"2026-03-10T13:37:54.351285+0000","last_became_peered":"2026-03-10T13:37:54.351285+0000","last_unstale":"2026-03-10T13:37:55.617456+0000","last_undegraded":"2026-03-10T13:37:55.617456+0000","last_fullsized":"2026-03-10T13:37:55.617456+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:45:17.347531+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.603033+0000","last_change":"2026-03-10T13:37:48.255695+0000","last_active":"2026-03-10T13:37:55.603033+0000","last_peered":"2026-03-10T13:37:55.603033+0000","last_clean":"2026-03-10T13:37:55.603033+0000","last_became_active":"2026-03-10T13:37:48.255584+0000","last_became_peered":"2026-03-10T13:37:48.255584+0000","last_unstale":"2026-03-10T13:37:55.603033+0000","last_undegraded":"2026-03-10T13:37:55.603033+0000","last_fullsized":"2026-03-10T13:37:55.603033+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:50:23.329268+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.1f","version":"58'11","reported_seq":42,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.287899+0000","last_change":"2026-03-10T13:37:50.280444+0000","last_active":"2026-03-10T13:37:56.287899+0000","last_peered":"2026-03-10T13:37:56.287899+0000","last_clean":"2026-03-10T13:37:56.287899+0000","last_became_active":"2026-03-10T13:37:50.280101+0000","last_became_peered":"2026-03-10T13:37:50.280101+0000","last_unstale":"2026-03-10T13:37:56.287899+0000","last_undegraded":"2026-03-10T13:37:56.287899+0000","last_fullsized":"2026-03-10T13:37:56.287899+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:12:07.663854+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,1],"acting":[6,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605474+0000","last_change":"2026-03-10T13:37:52.335288+0000","last_active":"2026-03-10T13:37:55.605474+0000","last_peered":"2026-03-10T13:37:55.605474+0000","last_clean":"2026-03-10T13:37:55.605474+0000","last_became_active":"2026-03-10T13:37:52.335140+0000","last_became_peered":"2026-03-10T13:37:52.335140+0000","last_unstale":"2026-03-10T13:37:55.605474+0000","last_undegraded":"2026-03-10T13:37:55.605474+0000","last_fullsized":"2026-03-10T13:37:55.605474+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:30:19.241412+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":64,"ondisk_log_size":64,"up":96,"acting":96,"num_store_stats":8},
{"poolid":4,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":698,"num_read_kb":455,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":393,"ondisk_log_size":393,"up":96,"acting":96,"num_store_stats":8},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":24,"num_read_kb":24,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":7,"num_read_kb":2,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"int
ernal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":2777088,"data_stored":2755680,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":7}],"osd_stats":[{"osd":7,"up_from":46,"seq":197568495621,"num_pgs":46,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27860,"kb_used_data":1028,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939564,"statfs":{"total":21470642176,"available":21442113536,"internally_reserved":0,"allocated":1052672,"data_stored":681753,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":41,"seq":176093659143,"num_pgs":43,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27844,"kb_used_data":1004,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939580,"statfs":{"total":21470642176,"available":21442129920,"internally_reserved":0,"allocated":1028096,"data_stored":679675,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":35,"seq":150323855370,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27408,"kb_used_data":564,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940016,"statfs":{"total":21470642176,"available":21442576384,"internally_reserved":0,"allocated":577536,"data_stored":221203,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_fro
m":28,"seq":120259084300,"num_pgs":58,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27420,"kb_used_data":588,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940004,"statfs":{"total":21470642176,"available":21442564096,"internally_reserved":0,"allocated":602112,"data_stored":222006,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":24,"seq":103079215118,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27432,"kb_used_data":588,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939992,"statfs":{"total":21470642176,"available":21442551808,"internally_reserved":0,"allocated":602112,"data_stored":221286,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":17,"seq":73014444048,"num_pgs":36,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27396,"kb_used_data":556,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940028,"statfs":{"total":21470642176,"available":21442588672,"internally_reserved":0,"allocated":569344,"data_stored":220992,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574866,"num_pgs":57,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27452,"kb_used_data":620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939972,"statfs":{"total":21470642176,"available":21442531328,"internally_reserved":0,"allocated":634880,"data_stored":222616,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":9,"seq":38654705684,"num_pgs":46,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27864,"kb_used_data":1032,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939560,"statfs":{"total":21470642176,"available":21442109440,"internally_reserved":0,"allocated":1056768,"data_stored":681462,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat"
:{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":408,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1131,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserv
ed":0,"allocated":12288,"data_stored":528,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":20480,"data_stored":1177,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1085,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":90112,"data_stored":2338,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":32768,"data_stored":798,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":1898,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":53248,"data_stored":1474,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":990,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":1034,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1254,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"tota
l":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T13:38:03.164 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph pg dump --format=json 2026-03-10T13:38:03.376 
INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:38:03.514 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.513+0000 7f7e7aaec640 1 -- 192.168.123.105:0/3135251198 >> v1:192.168.123.105:6789/0 conn(0x7f7e7410a910 legacy=0x7f7e7410acf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:03.514 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.514+0000 7f7e7aaec640 1 -- 192.168.123.105:0/3135251198 shutdown_connections 2026-03-10T13:38:03.514 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.514+0000 7f7e7aaec640 1 -- 192.168.123.105:0/3135251198 >> 192.168.123.105:0/3135251198 conn(0x7f7e741005f0 msgr2=0x7f7e74102a10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:38:03.514 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.514+0000 7f7e7aaec640 1 -- 192.168.123.105:0/3135251198 shutdown_connections 2026-03-10T13:38:03.514 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.514+0000 7f7e7aaec640 1 -- 192.168.123.105:0/3135251198 wait complete. 2026-03-10T13:38:03.514 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.514+0000 7f7e7aaec640 1 Processor -- start 2026-03-10T13:38:03.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.514+0000 7f7e7aaec640 1 -- start start 2026-03-10T13:38:03.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.514+0000 7f7e7aaec640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7e741110a0 con 0x7f7e7410a910 2026-03-10T13:38:03.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.514+0000 7f7e7aaec640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7e741b9a70 con 0x7f7e7410d7c0 2026-03-10T13:38:03.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.514+0000 7f7e7aaec640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7e741bac50 con 0x7f7e74111360 2026-03-10T13:38:03.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.515+0000 7f7e78861640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7f7e7410a910 0x7f7e74110950 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:60442/0 (socket says 192.168.123.105:60442) 2026-03-10T13:38:03.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.515+0000 7f7e73fff640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f7e7410d7c0 0x7f7e7407a560 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:42246/0 (socket says 192.168.123.105:42246) 2026-03-10T13:38:03.515 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.515+0000 7f7e73fff640 1 -- 192.168.123.105:0/2873154150 learned_addr learned my addr 192.168.123.105:0/2873154150 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:38:03.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.515+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3268556648 0 0) 0x7f7e741b9a70 con 0x7f7e7410d7c0 2026-03-10T13:38:03.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.515+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7e48003620 con 0x7f7e7410d7c0 
2026-03-10T13:38:03.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.515+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2496193037 0 0) 0x7f7e741110a0 con 0x7f7e7410a910 2026-03-10T13:38:03.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.516+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f7e741b9a70 con 0x7f7e7410a910 2026-03-10T13:38:03.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.516+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2745277410 0 0) 0x7f7e48003620 con 0x7f7e7410d7c0 2026-03-10T13:38:03.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.516+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f7e741110a0 con 0x7f7e7410d7c0 2026-03-10T13:38:03.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.516+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f7e680035c0 con 0x7f7e7410d7c0 2026-03-10T13:38:03.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.516+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 280138225 0 0) 0x7f7e741b9a70 con 0x7f7e7410a910 2026-03-10T13:38:03.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.516+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f7e48003620 con 0x7f7e7410a910 2026-03-10T13:38:03.516 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.516+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f7e60004500 con 0x7f7e7410a910 2026-03-10T13:38:03.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.516+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1737145908 0 0) 0x7f7e741110a0 con 0x7f7e7410d7c0 2026-03-10T13:38:03.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.516+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 >> v1:192.168.123.105:6790/0 conn(0x7f7e74111360 legacy=0x7f7e7407ac70 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T13:38:03.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.517+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 >> v1:192.168.123.105:6789/0 conn(0x7f7e7410a910 legacy=0x7f7e74110950 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:03.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.517+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7e741bbe30 con 0x7f7e7410d7c0 2026-03-10T13:38:03.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.517+0000 7f7e7aaec640 1 -- 192.168.123.105:0/2873154150 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f7e741b8ac0 con 0x7f7e7410d7c0 2026-03-10T13:38:03.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.517+0000 
7f7e7aaec640 1 -- 192.168.123.105:0/2873154150 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f7e741b9020 con 0x7f7e7410d7c0 2026-03-10T13:38:03.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.517+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f7e680038c0 con 0x7f7e7410d7c0 2026-03-10T13:38:03.517 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.517+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f7e680061b0 con 0x7f7e7410d7c0 2026-03-10T13:38:03.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.518+0000 7f7e7aaec640 1 -- 192.168.123.105:0/2873154150 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7e74106050 con 0x7f7e7410d7c0 2026-03-10T13:38:03.518 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.518+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 16) ==== 100051+0+0 (unknown 1317089487 0 0) 0x7f7e68003d40 con 0x7f7e7410d7c0 2026-03-10T13:38:03.519 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.519+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f7e68095610 con 0x7f7e7410d7c0 2026-03-10T13:38:03.521 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.521+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f7e6805e580 con 0x7f7e7410d7c0 2026-03-10T13:38:03.614 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.613+0000 7f7e7aaec640 1 -- 192.168.123.105:0/2873154150 --> v1:192.168.123.105:6800/3845654103 -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7f7e7410f1e0 con 0x7f7e48078190 2026-03-10T13:38:03.618 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.618+0000 7f7e71ffb640 1 -- 192.168.123.105:0/2873154150 <== mgr.14150 v1:192.168.123.105:6800/3845654103 1 ==== mgr_command_reply(tid 0: 0 dumped all) ==== 18+0+347370 (unknown 2965378022 0 1952213287) 0x7f7e60003490 con 0x7f7e48078190 2026-03-10T13:38:03.619 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:38:03.621 INFO:teuthology.orchestra.run.vm05.stderr:dumped all 2026-03-10T13:38:03.623 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.623+0000 7f7e7aaec640 1 -- 192.168.123.105:0/2873154150 >> v1:192.168.123.105:6800/3845654103 conn(0x7f7e48078190 legacy=0x7f7e4807a650 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:03.623 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.623+0000 7f7e7aaec640 1 -- 192.168.123.105:0/2873154150 >> v1:192.168.123.109:6789/0 conn(0x7f7e7410d7c0 legacy=0x7f7e7407a560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:03.623 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.623+0000 7f7e7aaec640 1 -- 192.168.123.105:0/2873154150 shutdown_connections 2026-03-10T13:38:03.623 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.623+0000 7f7e7aaec640 1 -- 192.168.123.105:0/2873154150 >> 192.168.123.105:0/2873154150 conn(0x7f7e741005f0 msgr2=0x7f7e74114790 
unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:38:03.624 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.623+0000 7f7e7aaec640 1 -- 192.168.123.105:0/2873154150 shutdown_connections 2026-03-10T13:38:03.624 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:03.623+0000 7f7e7aaec640 1 -- 192.168.123.105:0/2873154150 wait complete. 2026-03-10T13:38:03.796 INFO:teuthology.orchestra.run.vm05.stdout:{"pg_ready":true,"pg_map":{"version":116,"stamp":"2026-03-10T13:38:02.558084+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":775,"num_read_kb":518,"num_write":493,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":505,"ondisk_log_size":505,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":389,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":220676,"kb_used_data":5980,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518716,"statfs":{"total":171765137408,"available":171539165184,"internally_reserved":0,"allocated":6123520,"data_stored":3150993,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12710,"internal_metadata":219663962},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":4325,"num_objects":183,"num_object_clones":0,"num_object_copies":549,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":183,"num_whiteouts":0,"num_read":705,"num_read_kb":461,"num_write":421,"num_write_kb":35,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed"
:0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"8.001841"},"pg_stats":[{"pgid":"3.1f","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605435+0000","last_change":"2026-03-10T13:37:48.261382+0000","last_active":"2026-03-10T13:37:55.605435+0000","last_peered":"2026-03-10T13:37:55.605435+0000","last_clean":"2026-03-10T13:37:55.605435+0000","last_became_active":"2026-03-10T13:37:48.261230+0000","last_became_peered":"2026-03-10T13:37:48.261230+0000","last_unstale":"2026-03-10T13:37:55.605435+0000","last_undegraded":"2026-03-10T13:37:55.605435+0000","last_fullsized":"2026-03-10T13:37:55.605435+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T21:45:43.801591+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.18","version":"58'9","reported_seq":39,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.935315+0000","last_change":"2026-03-10T13:37:50.283127+0000","last_active":"2026-03-10T13:37:55.935315+0000","last_peered":"2026-03-10T13:37:55.935315+0000","last_clean":"2026-03-10T13:37:55.935315+0000","last_became_active":"2026-03-10T13:37:50.282819+0000","last_became_peered":"2026-03-10T13:37:50.282819+0000","last_unstale":"2026-03-10T13:37:55.935315+0000","last_undegraded":"2026-03-10T13:37:55.935315+0000","last_fullsized":"2026-03-10T13:37:55.935315+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-1
0T13:37:49.230505+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-12T00:16:08.578891+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.616923+0000","last_change":"2026-03-10T13:37:52.288423+0000","last_active":"2026-03-10T13:37:55.616923+0000","last_peered":"2026-03-10T13:37:55.616923+0000","last_clean":"2026-03-10T13:37:55.616923+0000","last_became_active":"2026-03-10T13:37:52.288205+0000","last_became_peered":"2026-03-10T13:37:52.288205+0000","last_unstale":"2026-03-10T13:37:55.616923+0000","last_undegraded":"2026-03-10T13:37:55.616923+0000","last_fullsized":"2026-03-10T13:37:55.616923+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:28:39.839601+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1a","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596842+0000","last_change":"2026-03-10T13:37:54.347025+0000","last_active":"2026-03-10T13:37:55.596842+0000","last_peered":"2026-03-10T13:37:55.596842+0000","last_clean":"2026-03-10T13:37:55.596842+0000","last_became_active":"2026-03-10T13:37:54.346925+0000","last_became_peered":"2026-03-10T13:37:54.346925+0000","last_unstale":"2026-03-10T13:37:55.596842+0000","last_undegraded":"2026-03-10T13:37:55.596842+0000","last_fullsized":"2026-03-10T13:37:55.596842+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:17:06.595218+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.1b","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602871+0000","last_change":"2026-03-10T13:37:54.385278+0000","last_active":"2026-03-10T13:37:55.602871+0000","last_peered":"2026-03-10T13:37:55.602871+0000","last_clean":"2026-03-10T13:37:55.602871+0000","last_became_active":"2026-03-10T13:37:54.384853+0000","last_became_peered":"2026-03-10T13:37:54.384853+0000","last_unstale":"2026-03-10T13:37:55.602871+0000","last_undegraded":"2026-03-10T13:37:55.602871+0000","last_fullsized":"2026-03-10T13:37:55.602871+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:40:10.603601+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1e","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602927+0000","last_change":"2026-03-10T13:37:48.246183+0000","last_active":"2026-03-10T13:37:55.602927+0000","last_peered":"2026-03-10T13:37:55.602927+0000","last_clean":"2026-03-10T13:37:55.602927+0000","last_became_active":"2026-03-10T13:37:48.246098+0000","last_became_peered":"2026-03-10T13:37:48.246098+0000","last_unstale":"2026-03-10T13:37:55.602927+0000","last_undegraded":"2026-03-10T13:37:55.602927+0000","last_fullsized":"2026-03-10T13:37:55.602927+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:39:13.642013+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.19","version":"58'15","reported_seq":48,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.292448+0000","last_change":"2026-03-10T13:37:50.263242+0000","last_active":"2026-03-10T13:37:56.292448+0000","last_peered":"2026-03-10T13:37:56.292448+0000","last_clean":"2026-03-10T13:37:56.292448+0000","last_became_active":"2026-03-10T13:37:50.263011+0000","last_became_peered":"2026-03-10T13:37:50.263011+0000","last_unstale":"2026-03-10T13:37:56.292448+0000","last_undegraded":"2026-03-10T13:37:56.292448+0000","last_fullsized":"2026-03-10T13:37:56.292448+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:19:53.502158+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,2,0],"acting":[3,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.595906+0000","last_change":"2026-03-10T13:37:52.288304+0000","last_active":"2026-03-10T13:37:55.595906+0000","last_peered":"2026-03-10T13:37:55.595906+0000","last_clean":"2026-03-10T13:37:55.595906+0000","last_became_active":"2026-03-10T13:37:52.279374+0000","last_became_peered":"2026-03-10T13:37:52.279374+0000","last_unstale":"2026-03-10T13:37:55.595906+0000","last_undegraded":"2026-03-10T13:37:55.595906+0000","last_fullsized":"2026-03-10T13:37:55.595906+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:42:36.264701+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1d","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629042+0000","last_change":"2026-03-10T13:37:48.261987+0000","last_active":"2026-03-10T13:37:55.629042+0000","last_peered":"2026-03-10T13:37:55.629042+0000","last_clean":"2026-03-10T13:37:55.629042+0000","last_became_active":"2026-03-10T13:37:48.261859+0000","last_became_peered":"2026-03-10T13:37:48.261859+0000","last_unstale":"2026-03-10T13:37:55.629042+0000","last_undegraded":"2026-03-10T13:37:55.629042+0000","last_fullsized":"2026-03-10T13:37:55.629042+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:38:33.629402+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1a","version":"58'9","reported_seq":39,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.134466+0000","last_change":"2026-03-10T13:37:50.269444+0000","last_active":"2026-03-10T13:37:56.134466+0000","last_peered":"2026-03-10T13:37:56.134466+0000","last_clean":"2026-03-10T13:37:56.134466+0000","last_became_active":"2026-03-10T13:37:50.269346+0000","last_became_peered":"2026-03-10T13:37:50.269346+0000","last_unstale":"2026-03-10T13:37:56.134466+0000","last_undegraded":"2026-03-10T13:37:56.134466+0000","last_fullsized":"2026-03-10T13:37:56.134466+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:47:35.869366+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,0],"acting":[4,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.628918+0000","last_change":"2026-03-10T13:37:52.334700+0000","last_active":"2026-03-10T13:37:55.628918+0000","last_peered":"2026-03-10T13:37:55.628918+0000","last_clean":"2026-03-10T13:37:55.628918+0000","last_became_active":"2026-03-10T13:37:52.334521+0000","last_became_peered":"2026-03-10T13:37:52.334521+0000","last_unstale":"2026-03-10T13:37:55.628918+0000","last_undegraded":"2026-03-10T13:37:55.628918+0000","last_fullsized":"2026-03-10T13:37:55.628918+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:53:59.212198+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605514+0000","last_change":"2026-03-10T13:37:54.368727+0000","last_active":"2026-03-10T13:37:55.605514+0000","last_peered":"2026-03-10T13:37:55.605514+0000","last_clean":"2026-03-10T13:37:55.605514+0000","last_became_active":"2026-03-10T13:37:54.368260+0000","last_became_peered":"2026-03-10T13:37:54.368260+0000","last_unstale":"2026-03-10T13:37:55.605514+0000","last_undegraded":"2026-03-10T13:37:55.605514+0000","last_fullsized":"2026-03-10T13:37:55.605514+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:47:04.304579+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1c","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629079+0000","last_change":"2026-03-10T13:37:48.261678+0000","last_active":"2026-03-10T13:37:55.629079+0000","last_peered":"2026-03-10T13:37:55.629079+0000","last_clean":"2026-03-10T13:37:55.629079+0000","last_became_active":"2026-03-10T13:37:48.261573+0000","last_became_peered":"2026-03-10T13:37:48.261573+0000","last_unstale":"2026-03-10T13:37:55.629079+0000","last_undegraded":"2026-03-10T13:37:55.629079+0000","last_fullsized":"2026-03-10T13:37:55.629079+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:57:07.970943+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1b","version":"58'5","reported_seq":33,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.085108+0000","last_change":"2026-03-10T13:37:50.294812+0000","last_active":"2026-03-10T13:37:56.085108+0000","last_peered":"2026-03-10T13:37:56.085108+0000","last_clean":"2026-03-10T13:37:56.085108+0000","last_became_active":"2026-03-10T13:37:50.294012+0000","last_became_peered":"2026-03-10T13:37:50.294012+0000","last_unstale":"2026-03-10T13:37:56.085108+0000","last_undegraded":"2026-03-10T13:37:56.085108+0000","last_fullsized":"2026-03-10T13:37:56.085108+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:01:37.351580+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,1],"acting":[4,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594513+0000","last_change":"2026-03-10T13:37:52.277265+0000","last_active":"2026-03-10T13:37:55.594513+0000","last_peered":"2026-03-10T13:37:55.594513+0000","last_clean":"2026-03-10T13:37:55.594513+0000","last_became_active":"2026-03-10T13:37:52.276994+0000","last_became_peered":"2026-03-10T13:37:52.276994+0000","last_unstale":"2026-03-10T13:37:55.594513+0000","last_undegraded":"2026-03-10T13:37:55.594513+0000","last_fullsized":"2026-03-10T13:37:55.594513+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:50:55.170143+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629151+0000","last_change":"2026-03-10T13:37:54.362426+0000","last_active":"2026-03-10T13:37:55.629151+0000","last_peered":"2026-03-10T13:37:55.629151+0000","last_clean":"2026-03-10T13:37:55.629151+0000","last_became_active":"2026-03-10T13:37:54.362139+0000","last_became_peered":"2026-03-10T13:37:54.362139+0000","last_unstale":"2026-03-10T13:37:55.629151+0000","last_undegraded":"2026-03-10T13:37:55.629151+0000","last_fullsized":"2026-03-10T13:37:55.629151+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:00:49.980455+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596717+0000","last_change":"2026-03-10T13:37:54.384072+0000","last_active":"2026-03-10T13:37:55.596717+0000","last_peered":"2026-03-10T13:37:55.596717+0000","last_clean":"2026-03-10T13:37:55.596717+0000","last_became_active":"2026-03-10T13:37:54.383400+0000","last_became_peered":"2026-03-10T13:37:54.383400+0000","last_unstale":"2026-03-10T13:37:55.596717+0000","last_undegraded":"2026-03-10T13:37:55.596717+0000","last_fullsized":"2026-03-10T13:37:55.596717+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:24:49.178116+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1b","version":"50'1","reported_seq":33,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.606077+0000","last_change":"2026-03-10T13:37:48.265883+0000","last_active":"2026-03-10T13:37:55.606077+0000","last_peered":"2026-03-10T13:37:55.606077+0000","last_clean":"2026-03-10T13:37:55.606077+0000","last_became_active":"2026-03-10T13:37:48.265512+0000","last_became_peered":"2026-03-10T13:37:48.265512+0000","last_unstale":"2026-03-10T13:37:55.606077+0000","last_undegraded":"2026-03-10T13:37:55.606077+0000","last_fullsized":"2026-03-10T13:37:55.606077+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:56:00.545045+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.1c","version":"58'15","reported_seq":48,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.181665+0000","last_change":"2026-03-10T13:37:50.272119+0000","last_active":"2026-03-10T13:37:56.181665+0000","last_peered":"2026-03-10T13:37:56.181665+0000","last_clean":"2026-03-10T13:37:56.181665+0000","last_became_active":"2026-03-10T13:37:50.272043+0000","last_became_peered":"2026-03-10T13:37:50.272043+0000","last_unstale":"2026-03-10T13:37:56.181665+0000","last_undegraded":"2026-03-10T13:37:56.181665+0000","last_fullsized":"2026-03-10T13:37:56.181665+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:32:58.947398+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,3],"acting":[2,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.617112+0000","last_change":"2026-03-10T13:37:52.289053+0000","last_active":"2026-03-10T13:37:55.617112+0000","last_peered":"2026-03-10T13:37:55.617112+0000","last_clean":"2026-03-10T13:37:55.617112+0000","last_became_active":"2026-03-10T13:37:52.288908+0000","last_became_peered":"2026-03-10T13:37:52.288908+0000","last_unstale":"2026-03-10T13:37:55.617112+0000","last_undegraded":"2026-03-10T13:37:55.617112+0000","last_fullsized":"2026-03-10T13:37:55.617112+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:53:54.242254+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602789+0000","last_change":"2026-03-10T13:37:54.385176+0000","last_active":"2026-03-10T13:37:55.602789+0000","last_peered":"2026-03-10T13:37:55.602789+0000","last_clean":"2026-03-10T13:37:55.602789+0000","last_became_active":"2026-03-10T13:37:54.384954+0000","last_became_peered":"2026-03-10T13:37:54.384954+0000","last_unstale":"2026-03-10T13:37:55.602789+0000","last_undegraded":"2026-03-10T13:37:55.602789+0000","last_fullsized":"2026-03-10T13:37:55.602789+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:52:00.808471+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596474+0000","last_change":"2026-03-10T13:37:48.256424+0000","last_active":"2026-03-10T13:37:55.596474+0000","last_peered":"2026-03-10T13:37:55.596474+0000","last_clean":"2026-03-10T13:37:55.596474+0000","last_became_active":"2026-03-10T13:37:48.256124+0000","last_became_peered":"2026-03-10T13:37:48.256124+0000","last_unstale":"2026-03-10T13:37:55.596474+0000","last_undegraded":"2026-03-10T13:37:55.596474+0000","last_fullsized":"2026-03-10T13:37:55.596474+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:12:26.771941+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1d","version":"58'12","reported_seq":46,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.299010+0000","last_change":"2026-03-10T13:37:50.279840+0000","last_active":"2026-03-10T13:37:56.299010+0000","last_peered":"2026-03-10T13:37:56.299010+0000","last_clean":"2026-03-10T13:37:56.299010+0000","last_became_active":"2026-03-10T13:37:50.279746+0000","last_became_peered":"2026-03-10T13:37:50.279746+0000","last_unstale":"2026-03-10T13:37:56.299010+0000","last_undegraded":"2026-03-10T13:37:56.299010+0000","last_fullsized":"2026-03-10T13:37:56.299010+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:28:32.598870+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596323+0000","last_change":"2026-03-10T13:37:52.274977+0000","last_active":"2026-03-10T13:37:55.596323+0000","last_peered":"2026-03-10T13:37:55.596323+0000","last_clean":"2026-03-10T13:37:55.596323+0000","last_became_active":"2026-03-10T13:37:52.274630+0000","last_became_peered":"2026-03-10T13:37:52.274630+0000","last_unstale":"2026-03-10T13:37:55.596323+0000","last_undegraded":"2026-03-10T13:37:55.596323+0000","last_fullsized":"2026-03-10T13:37:55.596323+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:55:22.186174+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.1c","version":"58'1","reported_seq":16,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.595245+0000","last_change":"2026-03-10T13:37:54.369275+0000","last_active":"2026-03-10T13:37:55.595245+0000","last_peered":"2026-03-10T13:37:55.595245+0000","last_clean":"2026-03-10T13:37:55.595245+0000","last_became_active":"2026-03-10T13:37:54.369142+0000","last_became_peered":"2026-03-10T13:37:54.369142+0000","last_unstale":"2026-03-10T13:37:55.595245+0000","last_undegraded":"2026-03-10T13:37:55.595245+0000","last_fullsized":"2026-03-10T13:37:55.595245+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:49:57.927606+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"50'1","reported_seq":28,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.617351+0000","last_change":"2026-03-10T13:37:48.260445+0000","last_active":"2026-03-10T13:37:55.617351+0000","last_peered":"2026-03-10T13:37:55.617351+0000","last_clean":"2026-03-10T13:37:55.617351+0000","last_became_active":"2026-03-10T13:37:48.260209+0000","last_became_peered":"2026-03-10T13:37:48.260209+0000","last_unstale":"2026-03-10T13:37:55.617351+0000","last_undegraded":"2026-03-10T13:37:55.617351+0000","last_fullsized":"2026-03-10T13:37:55.617351+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:17:07.458536+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.1e","version":"58'10","reported_seq":38,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.279542+0000","last_change":"2026-03-10T13:37:50.275138+0000","last_active":"2026-03-10T13:37:56.279542+0000","last_peered":"2026-03-10T13:37:56.279542+0000","last_clean":"2026-03-10T13:37:56.279542+0000","last_became_active":"2026-03-10T13:37:50.274796+0000","last_became_peered":"2026-03-10T13:37:50.274796+0000","last_unstale":"2026-03-10T13:37:56.279542+0000","last_undegraded":"2026-03-10T13:37:56.279542+0000","last_fullsized":"2026-03-10T13:37:56.279542+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:23:31.612371+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1f","version":"58'8","reported_seq":33,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.755821+0000","last_change":"2026-03-10T13:37:52.275739+0000","last_active":"2026-03-10T13:37:55.755821+0000","last_peered":"2026-03-10T13:37:55.755821+0000","last_clean":"2026-03-10T13:37:55.755821+0000","last_became_active":"2026-03-10T13:37:52.275525+0000","last_became_peered":"2026-03-10T13:37:52.275525+0000","last_unstale":"2026-03-10T13:37:55.755821+0000","last_undegraded":"2026-03-10T13:37:55.755821+0000","last_fullsized":"2026-03-10T13:37:55.755821+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:55:06.765772+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.f","version":"58'15","reported_seq":48,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.309345+0000","last_change":"2026-03-10T13:37:50.285322+0000","last_active":"2026-03-10T13:37:56.309345+0000","last_peered":"2026-03-10T13:37:56.309345+0000","last_clean":"2026-03-10T13:37:56.309345+0000","last_became_active":"2026-03-10T13:37:50.284883+0000","last_became_peered":"2026-03-10T13:37:50.284883+0000","last_unstale":"2026-03-10T13:37:56.309345+0000","last_undegraded":"2026-03-10T13:37:56.309345+0000","last_fullsized":"2026-03-10T13:37:56.309345+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:04:04.846287+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.8","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.603009+0000","last_change":"2026-03-10T13:37:48.245427+0000","last_active":"2026-03-10T13:37:55.603009+0000","last_peered":"2026-03-10T13:37:55.603009+0000","last_clean":"2026-03-10T13:37:55.603009+0000","last_became_active":"2026-03-10T13:37:48.245298+0000","last_became_peered":"2026-03-10T13:37:48.245298+0000","last_unstale":"2026-03-10T13:37:55.603009+0000","last_undegraded":"2026-03-10T13:37:55.603009+0000","last_fullsized":"2026-03-10T13:37:55.603009+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:15:19.145024+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.e","version":"58'8","reported_seq":30,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.779232+0000","last_change":"2026-03-10T13:37:52.289819+0000","last_active":"2026-03-10T13:37:55.779232+0000","last_peered":"2026-03-10T13:37:55.779232+0000","last_clean":"2026-03-10T13:37:55.779232+0000","last_became_active":"2026-03-10T13:37:52.289670+0000","last_became_peered":"2026-03-10T13:37:52.289670+0000","last_unstale":"2026-03-10T13:37:55.779232+0000","last_undegraded":"2026-03-10T13:37:55.779232+0000","last_fullsized":"2026-03-10T13:37:55.779232+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:56:44.344376+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629803+0000","last_change":"2026-03-10T13:37:54.369883+0000","last_active":"2026-03-10T13:37:55.629803+0000","last_peered":"2026-03-10T13:37:55.629803+0000","last_clean":"2026-03-10T13:37:55.629803+0000","last_became_active":"2026-03-10T13:37:54.369793+0000","last_became_peered":"2026-03-10T13:37:54.369793+0000","last_unstale":"2026-03-10T13:37:55.629803+0000","last_undegraded":"2026-03-10T13:37:55.629803+0000","last_fullsized":"2026-03-10T13:37:55.629803+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:15:25.187079+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.0","version":"58'18","reported_seq":55,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.212989+0000","last_change":"2026-03-10T13:37:50.275279+0000","last_active":"2026-03-10T13:37:56.212989+0000","last_peered":"2026-03-10T13:37:56.212989+0000","last_clean":"2026-03-10T13:37:56.212989+0000","last_became_active":"2026-03-10T13:37:50.274378+0000","last_became_peered":"2026-03-10T13:37:50.274378+0000","last_unstale":"2026-03-10T13:37:56.212989+0000","last_undegraded":"2026-03-10T13:37:56.212989+0000","last_fullsized":"2026-03-10T13:37:56.212989+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:01:52.730993+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.7","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.603165+0000","last_change":"2026-03-10T13:37:48.255517+0000","last_active":"2026-03-10T13:37:55.603165+0000","last_peered":"2026-03-10T13:37:55.603165+0000","last_clean":"2026-03-10T13:37:55.603165+0000","last_became_active":"2026-03-10T13:37:48.255409+0000","last_became_peered":"2026-03-10T13:37:48.255409+0000","last_unstale":"2026-03-10T13:37:55.603165+0000","last_undegraded":"2026-03-10T13:37:55.603165+0000","last_fullsized":"2026-03-10T13:37:55.603165+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:50:43.978621+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.595954+0000","last_change":"2026-03-10T13:37:52.279050+0000","last_active":"2026-03-10T13:37:55.595954+0000","last_peered":"2026-03-10T13:37:55.595954+0000","last_clean":"2026-03-10T13:37:55.595954+0000","last_became_active":"2026-03-10T13:37:52.278977+0000","last_became_peered":"2026-03-10T13:37:52.278977+0000","last_unstale":"2026-03-10T13:37:55.595954+0000","last_undegraded":"2026-03-10T13:37:55.595954+0000","last_fullsized":"2026-03-10T13:37:55.595954+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:25:25.421952+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.595930+0000","last_change":"2026-03-10T13:37:54.363185+0000","last_active":"2026-03-10T13:37:55.595930+0000","last_peered":"2026-03-10T13:37:55.595930+0000","last_clean":"2026-03-10T13:37:55.595930+0000","last_became_active":"2026-03-10T13:37:54.362918+0000","last_became_peered":"2026-03-10T13:37:54.362918+0000","last_unstale":"2026-03-10T13:37:55.595930+0000","last_undegraded":"2026-03-10T13:37:55.595930+0000","last_fullsized":"2026-03-10T13:37:55.595930+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:06:10.006331+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1","version":"58'14","reported_seq":44,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.282771+0000","last_change":"2026-03-10T13:37:50.271662+0000","last_active":"2026-03-10T13:37:56.282771+0000","last_peered":"2026-03-10T13:37:56.282771+0000","last_clean":"2026-03-10T13:37:56.282771+0000","last_became_active":"2026-03-10T13:37:50.270857+0000","last_became_peered":"2026-03-10T13:37:50.270857+0000","last_unstale":"2026-03-10T13:37:56.282771+0000","last_undegraded":"2026-03-10T13:37:56.282771+0000","last_fullsized":"2026-03-10T13:37:56.282771+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:14:40.676561+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.6","version":"50'1","reported_seq":28,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605340+0000","last_change":"2026-03-10T13:37:48.266452+0000","last_active":"2026-03-10T13:37:55.605340+0000","last_peered":"2026-03-10T13:37:55.605340+0000","last_clean":"2026-03-10T13:37:55.605340+0000","last_became_active":"2026-03-10T13:37:48.266368+0000","last_became_peered":"2026-03-10T13:37:48.266368+0000","last_unstale":"2026-03-10T13:37:55.605340+0000","last_undegraded":"2026-03-10T13:37:55.605340+0000","last_fullsized":"2026-03-10T13:37:55.605340+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:00:08.857999+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.0","version":"58'8","reported_seq":33,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.768443+0000","last_change":"2026-03-10T13:37:52.274749+0000","last_active":"2026-03-10T13:37:55.768443+0000","last_peered":"2026-03-10T13:37:55.768443+0000","last_clean":"2026-03-10T13:37:55.768443+0000","last_became_active":"2026-03-10T13:37:52.274618+0000","last_became_peered":"2026-03-10T13:37:52.274618+0000","last_unstale":"2026-03-10T13:37:55.768443+0000","last_undegraded":"2026-03-10T13:37:55.768443+0000","last_fullsized":"2026-03-10T13:37:55.768443+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:22:32.870258+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594322+0000","last_change":"2026-03-10T13:37:54.384659+0000","last_active":"2026-03-10T13:37:55.594322+0000","last_peered":"2026-03-10T13:37:55.594322+0000","last_clean":"2026-03-10T13:37:55.594322+0000","last_became_active":"2026-03-10T13:37:54.384493+0000","last_became_peered":"2026-03-10T13:37:54.384493+0000","last_unstale":"2026-03-10T13:37:55.594322+0000","last_undegraded":"2026-03-10T13:37:55.594322+0000","last_fullsized":"2026-03-10T13:37:55.594322+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:21:40.276801+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.2","version":"58'10","reported_seq":38,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.257726+0000","last_change":"2026-03-10T13:37:50.285154+0000","last_active":"2026-03-10T13:37:56.257726+0000","last_peered":"2026-03-10T13:37:56.257726+0000","last_clean":"2026-03-10T13:37:56.257726+0000","last_became_active":"2026-03-10T13:37:50.284994+0000","last_became_peered":"2026-03-10T13:37:50.284994+0000","last_unstale":"2026-03-10T13:37:56.257726+0000","last_undegraded":"2026-03-10T13:37:56.257726+0000","last_fullsized":"2026-03-10T13:37:56.257726+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:50:47.061816+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629399+0000","last_change":"2026-03-10T13:37:48.247954+0000","last_active":"2026-03-10T13:37:55.629399+0000","last_peered":"2026-03-10T13:37:55.629399+0000","last_clean":"2026-03-10T13:37:55.629399+0000","last_became_active":"2026-03-10T13:37:48.247663+0000","last_became_peered":"2026-03-10T13:37:48.247663+0000","last_unstale":"2026-03-10T13:37:55.629399+0000","last_undegraded":"2026-03-10T13:37:55.629399+0000","last_fullsized":"2026-03-10T13:37:55.629399+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:47:43.474018+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.3","version":"58'8","reported_seq":30,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.781493+0000","last_change":"2026-03-10T13:37:52.287296+0000","last_active":"2026-03-10T13:37:55.781493+0000","last_peered":"2026-03-10T13:37:55.781493+0000","last_clean":"2026-03-10T13:37:55.781493+0000","last_became_active":"2026-03-10T13:37:52.287120+0000","last_became_peered":"2026-03-10T13:37:52.287120+0000","last_unstale":"2026-03-10T13:37:55.781493+0000","last_undegraded":"2026-03-10T13:37:55.781493+0000","last_fullsized":"2026-03-10T13:37:55.781493+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:45:38.579724+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605887+0000","last_change":"2026-03-10T13:37:54.377432+0000","last_active":"2026-03-10T13:37:55.605887+0000","last_peered":"2026-03-10T13:37:55.605887+0000","last_clean":"2026-03-10T13:37:55.605887+0000","last_became_active":"2026-03-10T13:37:54.377248+0000","last_became_peered":"2026-03-10T13:37:54.377248+0000","last_unstale":"2026-03-10T13:37:55.605887+0000","last_undegraded":"2026-03-10T13:37:55.605887+0000","last_fullsized":"2026-03-10T13:37:55.605887+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:09:25.942919+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.3","version":"58'19","reported_seq":59,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.227456+0000","last_change":"2026-03-10T13:37:50.274855+0000","last_active":"2026-03-10T13:37:56.227456+0000","last_peered":"2026-03-10T13:37:56.227456+0000","last_clean":"2026-03-10T13:37:56.227456+0000","last_became_active":"2026-03-10T13:37:50.274520+0000","last_became_peered":"2026-03-10T13:37:50.274520+0000","last_unstale":"2026-03-10T13:37:56.227456+0000","last_undegraded":"2026-03-10T13:37:56.227456+0000","last_fullsized":"2026-03-10T13:37:56.227456+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:52:41.043795+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,7],"acting":[0,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.4","version":"50'1","reported_seq":33,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.620648+0000","last_change":"2026-03-10T13:37:48.244071+0000","last_active":"2026-03-10T13:37:55.620648+0000","last_peered":"2026-03-10T13:37:55.620648+0000","last_clean":"2026-03-10T13:37:55.620648+0000","last_became_active":"2026-03-10T13:37:48.243884+0000","last_became_peered":"2026-03-10T13:37:48.243884+0000","last_unstale":"2026-03-10T13:37:55.620648+0000","last_undegraded":"2026-03-10T13:37:55.620648+0000","last_fullsized":"2026-03-10T13:37:55.620648+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:30:00.549951+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.637294+0000","last_change":"2026-03-10T13:37:52.285298+0000","last_active":"2026-03-10T13:37:55.637294+0000","last_peered":"2026-03-10T13:37:55.637294+0000","last_clean":"2026-03-10T13:37:55.637294+0000","last_became_active":"2026-03-10T13:37:52.285188+0000","last_became_peered":"2026-03-10T13:37:52.285188+0000","last_unstale":"2026-03-10T13:37:55.637294+0000","last_undegraded":"2026-03-10T13:37:55.637294+0000","last_fullsized":"2026-03-10T13:37:55.637294+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:01:28.645328+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.617243+0000","last_change":"2026-03-10T13:37:54.383297+0000","last_active":"2026-03-10T13:37:55.617243+0000","last_peered":"2026-03-10T13:37:55.617243+0000","last_clean":"2026-03-10T13:37:55.617243+0000","last_became_active":"2026-03-10T13:37:54.383129+0000","last_became_peered":"2026-03-10T13:37:54.383129+0000","last_unstale":"2026-03-10T13:37:55.617243+0000","last_undegraded":"2026-03-10T13:37:55.617243+0000","last_fullsized":"2026-03-10T13:37:55.617243+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:13:30.685000+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.4","version":"58'28","reported_seq":74,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.267092+0000","last_change":"2026-03-10T13:37:50.285805+0000","last_active":"2026-03-10T13:37:56.267092+0000","last_peered":"2026-03-10T13:37:56.267092+0000","last_clean":"2026-03-10T13:37:56.267092+0000","last_became_active":"2026-03-10T13:37:50.285723+0000","last_became_peered":"2026-03-10T13:37:50.285723+0000","last_unstale":"2026-03-10T13:37:56.267092+0000","last_undegraded":"2026-03-10T13:37:56.267092+0000","last_fullsized":"2026-03-10T13:37:56.267092+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":28,"log_dups_size":0,"ondisk_log_size":28,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:19:04.818425+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":48,"num_read_kb":33,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,3],"acting":[1,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.3","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596599+0000","last_change":"2026-03-10T13:37:48.260328+0000","last_active":"2026-03-10T13:37:55.596599+0000","last_peered":"2026-03-10T13:37:55.596599+0000","last_clean":"2026-03-10T13:37:55.596599+0000","last_became_active":"2026-03-10T13:37:48.260187+0000","last_became_peered":"2026-03-10T13:37:48.260187+0000","last_unstale":"2026-03-10T13:37:55.596599+0000","last_undegraded":"2026-03-10T13:37:55.596599+0000","last_fullsized":"2026-03-10T13:37:55.596599+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:02:20.935175+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"52'2","reported_seq":34,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629469+0000","last_change":"2026-03-10T13:37:50.258883+0000","last_active":"2026-03-10T13:37:55.629469+0000","last_peered":"2026-03-10T13:37:55.629469+0000","last_clean":"2026-03-10T13:37:55.629469+0000","last_became_active":"2026-03-10T13:37:48.244175+0000","last_became_peered":"2026-03-10T13:37:48.244175+0000","last_unstale":"2026-03-10T13:37:55.629469+0000","last_undegraded":"2026-03-10T13:37:55.629469+0000","last_fullsized":"2026-03-10T13:37:55.629469+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:11:44.460102+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00086743300000000003,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605684+0000","last_change":"2026-03-10T13:37:52.337814+0000","last_active":"2026-03-10T13:37:55.605684+0000","last_peered":"2026-03-10T13:37:55.605684+0000","last_clean":"2026-03-10T13:37:55.605684+0000","last_became_active":"2026-03-10T13:37:52.337719+0000","last_became_peered":"2026-03-10T13:37:52.337719+0000","last_unstale":"2026-03-10T13:37:55.605684+0000","last_undegraded":"2026-03-10T13:37:55.605684+0000","last_fullsized":"2026-03-10T13:37:55.605684+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:31:18.102603+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602605+0000","last_change":"2026-03-10T13:37:54.361344+0000","last_active":"2026-03-10T13:37:55.602605+0000","last_peered":"2026-03-10T13:37:55.602605+0000","last_clean":"2026-03-10T13:37:55.602605+0000","last_became_active":"2026-03-10T13:37:54.361141+0000","last_became_peered":"2026-03-10T13:37:54.361141+0000","last_unstale":"2026-03-10T13:37:55.602605+0000","last_undegraded":"2026-03-10T13:37:55.602605+0000","last_fullsized":"2026-03-10T13:37:55.602605+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:24:48.534009+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.7","version":"58'13","reported_seq":50,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.163793+0000","last_change":"2026-03-10T13:37:50.283610+0000","last_active":"2026-03-10T13:37:56.163793+0000","last_peered":"2026-03-10T13:37:56.163793+0000","last_clean":"2026-03-10T13:37:56.163793+0000","last_became_active":"2026-03-10T13:37:50.279315+0000","last_became_peered":"2026-03-10T13:37:50.279315+0000","last_unstale":"2026-03-10T13:37:56.163793+0000","last_undegraded":"2026-03-10T13:37:56.163793+0000","last_fullsized":"2026-03-10T13:37:56.163793+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:12:19.738249+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.0","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.617269+0000","last_change":"2026-03-10T13:37:48.243979+0000","last_active":"2026-03-10T13:37:55.617269+0000","last_peered":"2026-03-10T13:37:55.617269+0000","last_clean":"2026-03-10T13:37:55.617269+0000","last_became_active":"2026-03-10T13:37:48.243687+0000","last_became_peered":"2026-03-10T13:37:48.243687+0000","last_unstale":"2026-03-10T13:37:55.617269+0000","last_undegraded":"2026-03-10T13:37:55.617269+0000","last_fullsized":"2026-03-10T13:37:55.617269+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:36:56.568375+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"50'1","reported_seq":33,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.674034+0000","last_change":"2026-03-10T13:37:50.255940+0000","last_active":"2026-03-10T13:37:55.674034+0000","last_peered":"2026-03-10T13:37:55.674034+0000","last_clean":"2026-03-10T13:37:55.674034+0000","last_became_active":"2026-03-10T13:37:48.248249+0000","last_became_peered":"2026-03-10T13:37:48.248249+0000","last_unstale":"2026-03-10T13:37:55.674034+0000","last_undegraded":"2026-03-10T13:37:55.674034+0000","last_fullsized":"2026-03-10T13:37:55.674034+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:55:01.231709+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00053989999999999995,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.674095+0000","last_change":"2026-03-10T13:37:52.286245+0000","last_active":"2026-03-10T13:37:55.674095+0000","last_peered":"2026-03-10T13:37:55.674095+0000","last_clean":"2026-03-10T13:37:55.674095+0000","last_became_active":"2026-03-10T13:37:52.285902+0000","last_became_peered":"2026-03-10T13:37:52.285902+0000","last_unstale":"2026-03-10T13:37:55.674095+0000","last_undegraded":"2026-03-10T13:37:55.674095+0000","last_fullsized":"2026-03-10T13:37:55.674095+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:57:13.600272+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594409+0000","last_change":"2026-03-10T13:37:54.384692+0000","last_active":"2026-03-10T13:37:55.594409+0000","last_peered":"2026-03-10T13:37:55.594409+0000","last_clean":"2026-03-10T13:37:55.594409+0000","last_became_active":"2026-03-10T13:37:54.384612+0000","last_became_peered":"2026-03-10T13:37:54.384612+0000","last_unstale":"2026-03-10T13:37:55.594409+0000","last_undegraded":"2026-03-10T13:37:55.594409+0000","last_fullsized":"2026-03-10T13:37:55.594409+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:36:48.828084+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.6","version":"58'12","reported_seq":41,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.178524+0000","last_change":"2026-03-10T13:37:50.278213+0000","last_active":"2026-03-10T13:37:56.178524+0000","last_peered":"2026-03-10T13:37:56.178524+0000","last_clean":"2026-03-10T13:37:56.178524+0000","last_became_active":"2026-03-10T13:37:50.278032+0000","last_became_peered":"2026-03-10T13:37:50.278032+0000","last_unstale":"2026-03-10T13:37:56.178524+0000","last_undegraded":"2026-03-10T13:37:56.178524+0000","last_fullsized":"2026-03-10T13:37:56.178524+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:44:08.519664+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,2],"acting":[0,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605384+0000","last_change":"2026-03-10T13:37:48.263540+0000","last_active":"2026-03-10T13:37:55.605384+0000","last_peered":"2026-03-10T13:37:55.605384+0000","last_clean":"2026-03-10T13:37:55.605384+0000","last_became_active":"2026-03-10T13:37:48.263445+0000","last_became_peered":"2026-03-10T13:37:48.263445+0000","last_unstale":"2026-03-10T13:37:55.605384+0000","last_undegraded":"2026-03-10T13:37:55.605384+0000","last_fullsized":"2026-03-10T13:37:55.605384+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:41:50.127741+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"58'5","reported_seq":42,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:57.259025+0000","last_change":"2026-03-10T13:37:50.259062+0000","last_active":"2026-03-10T13:37:57.259025+0000","last_peered":"2026-03-10T13:37:57.259025+0000","last_clean":"2026-03-10T13:37:57.259025+0000","last_became_active":"2026-03-10T13:37:48.254460+0000","last_became_peered":"2026-03-10T13:37:48.254460+0000","last_unstale":"2026-03-10T13:37:57.259025+0000","last_undegraded":"2026-03-10T13:37:57.259025+0000","last_fullsized":"2026-03-10T13:37:57.259025+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:10:36.108168+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00078135200000000002,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":2,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629172+0000","last_change":"2026-03-10T13:37:52.338874+0000","last_active":"2026-03-10T13:37:55.629172+0000","last_peered":"2026-03-10T13:37:55.629172+0000","last_clean":"2026-03-10T13:37:55.629172+0000","last_became_active":"2026-03-10T13:37:52.338525+0000","last_became_peered":"2026-03-10T13:37:52.338525+0000","last_unstale":"2026-03-10T13:37:55.629172+0000","last_undegraded":"2026-03-10T13:37:55.629172+0000","last_fullsized":"2026-03-10T13:37:55.629172+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:43:56.051905+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.617297+0000","last_change":"2026-03-10T13:37:54.362690+0000","last_active":"2026-03-10T13:37:55.617297+0000","last_peered":"2026-03-10T13:37:55.617297+0000","last_clean":"2026-03-10T13:37:55.617297+0000","last_became_active":"2026-03-10T13:37:54.362554+0000","last_became_peered":"2026-03-10T13:37:54.362554+0000","last_unstale":"2026-03-10T13:37:55.617297+0000","last_undegraded":"2026-03-10T13:37:55.617297+0000","last_fullsized":"2026-03-10T13:37:55.617297+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:09:32.394329+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.5","version":"58'16","reported_seq":48,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.271005+0000","last_change":"2026-03-10T13:37:50.268616+0000","last_active":"2026-03-10T13:37:56.271005+0000","last_peered":"2026-03-10T13:37:56.271005+0000","last_clean":"2026-03-10T13:37:56.271005+0000","last_became_active":"2026-03-10T13:37:50.268372+0000","last_became_peered":"2026-03-10T13:37:50.268372+0000","last_unstale":"2026-03-10T13:37:56.271005+0000","last_undegraded":"2026-03-10T13:37:56.271005+0000","last_fullsized":"2026-03-10T13:37:56.271005+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:15:44.097040+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.2","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.603078+0000","last_change":"2026-03-10T13:37:48.245907+0000","last_active":"2026-03-10T13:37:55.603078+0000","last_peered":"2026-03-10T13:37:55.603078+0000","last_clean":"2026-03-10T13:37:55.603078+0000","last_became_active":"2026-03-10T13:37:48.245814+0000","last_became_peered":"2026-03-10T13:37:48.245814+0000","last_unstale":"2026-03-10T13:37:55.603078+0000","last_undegraded":"2026-03-10T13:37:55.603078+0000","last_fullsized":"2026-03-10T13:37:55.603078+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:09:23.223356+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"1.0","version":"20'32","reported_seq":37,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594968+0000","last_change":"2026-03-10T13:37:46.464837+0000","last_active":"2026-03-10T13:37:55.594968+0000","last_peered":"2026-03-10T13:37:55.594968+0000","last_clean":"2026-03-10T13:37:55.594968+0000","last_became_active":"2026-03-10T13:37:46.156521+0000","last_became_peered":"2026-03-10T13:37:46.156521+0000","last_unstale":"2026-03-10T13:37:55.594968+0000","last_undegraded":"2026-03-10T13:37:55.594968+0000","last_fullsized":"2026-03-10T13:37:55.594968+0000","mapping_epoch":47,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":48,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:36:50.964361+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:36:50.964361+0000","last_clean_scrub_stamp":"2026-03-10T13:36:50.964361+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:13:13.823806+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.595007+0000","last_change":"2026-03-10T13:37:52.289669+0000","last_active":"2026-03-10T13:37:55.595007+0000","last_peered":"2026-03-10T13:37:55.595007+0000","last_clean":"2026-03-10T13:37:55.595007+0000","last_became_active":"2026-03-10T13:37:52.289484+0000","last_became_peered":"2026-03-10T13:37:52.289484+0000","last_unstale":"2026-03-10T13:37:55.595007+0000","last_undegraded":"2026-03-10T13:37:55.595007+0000","last_fullsized":"2026-03-10T13:37:55.595007+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:28:32.834254+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.628972+0000","last_change":"2026-03-10T13:37:54.362550+0000","last_active":"2026-03-10T13:37:55.628972+0000","last_peered":"2026-03-10T13:37:55.628972+0000","last_clean":"2026-03-10T13:37:55.628972+0000","last_became_active":"2026-03-10T13:37:54.361963+0000","last_became_peered":"2026-03-10T13:37:54.361963+0000","last_unstale":"2026-03-10T13:37:55.628972+0000","last_undegraded":"2026-03-10T13:37:55.628972+0000","last_fullsized":"2026-03-10T13:37:55.628972+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:00:38.591531+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.e","version":"58'11","reported_seq":42,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.276478+0000","last_change":"2026-03-10T13:37:50.283275+0000","last_active":"2026-03-10T13:37:56.276478+0000","last_peered":"2026-03-10T13:37:56.276478+0000","last_clean":"2026-03-10T13:37:56.276478+0000","last_became_active":"2026-03-10T13:37:50.283053+0000","last_became_peered":"2026-03-10T13:37:50.283053+0000","last_unstale":"2026-03-10T13:37:56.276478+0000","last_undegraded":"2026-03-10T13:37:56.276478+0000","last_fullsized":"2026-03-10T13:37:56.276478+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:35:21.514199+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.9","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596433+0000","last_change":"2026-03-10T13:37:48.257892+0000","last_active":"2026-03-10T13:37:55.596433+0000","last_peered":"2026-03-10T13:37:55.596433+0000","last_clean":"2026-03-10T13:37:55.596433+0000","last_became_active":"2026-03-10T13:37:48.257821+0000","last_became_peered":"2026-03-10T13:37:48.257821+0000","last_unstale":"2026-03-10T13:37:55.596433+0000","last_undegraded":"2026-03-10T13:37:55.596433+0000","last_fullsized":"2026-03-10T13:37:55.596433+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:34:33.136489+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629688+0000","last_change":"2026-03-10T13:37:52.335981+0000","last_active":"2026-03-10T13:37:55.629688+0000","last_peered":"2026-03-10T13:37:55.629688+0000","last_clean":"2026-03-10T13:37:55.629688+0000","last_became_active":"2026-03-10T13:37:52.335837+0000","last_became_peered":"2026-03-10T13:37:52.335837+0000","last_unstale":"2026-03-10T13:37:55.629688+0000","last_undegraded":"2026-03-10T13:37:55.629688+0000","last_fullsized":"2026-03-10T13:37:55.629688+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:40:03.789147+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602498+0000","last_change":"2026-03-10T13:37:54.385071+0000","last_active":"2026-03-10T13:37:55.602498+0000","last_peered":"2026-03-10T13:37:55.602498+0000","last_clean":"2026-03-10T13:37:55.602498+0000","last_became_active":"2026-03-10T13:37:54.384190+0000","last_became_peered":"2026-03-10T13:37:54.384190+0000","last_unstale":"2026-03-10T13:37:55.602498+0000","last_undegraded":"2026-03-10T13:37:55.602498+0000","last_fullsized":"2026-03-10T13:37:55.602498+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:44:44.984953+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.d","version":"58'17","reported_seq":51,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.273132+0000","last_change":"2026-03-10T13:37:50.295146+0000","last_active":"2026-03-10T13:37:56.273132+0000","last_peered":"2026-03-10T13:37:56.273132+0000","last_clean":"2026-03-10T13:37:56.273132+0000","last_became_active":"2026-03-10T13:37:50.294144+0000","last_became_peered":"2026-03-10T13:37:50.294144+0000","last_unstale":"2026-03-10T13:37:56.273132+0000","last_undegraded":"2026-03-10T13:37:56.273132+0000","last_fullsized":"2026-03-10T13:37:56.273132+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:38:13.212439+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,1],"acting":[4,2,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.a","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.636805+0000","last_change":"2026-03-10T13:37:48.262800+0000","last_active":"2026-03-10T13:37:55.636805+0000","last_peered":"2026-03-10T13:37:55.636805+0000","last_clean":"2026-03-10T13:37:55.636805+0000","last_became_active":"2026-03-10T13:37:48.262566+0000","last_became_peered":"2026-03-10T13:37:48.262566+0000","last_unstale":"2026-03-10T13:37:55.636805+0000","last_undegraded":"2026-03-10T13:37:55.636805+0000","last_fullsized":"2026-03-10T13:37:55.636805+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:06:07.061463+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.617733+0000","last_change":"2026-03-10T13:37:52.288990+0000","last_active":"2026-03-10T13:37:55.617733+0000","last_peered":"2026-03-10T13:37:55.617733+0000","last_clean":"2026-03-10T13:37:55.617733+0000","last_became_active":"2026-03-10T13:37:52.288809+0000","last_became_peered":"2026-03-10T13:37:52.288809+0000","last_unstale":"2026-03-10T13:37:55.617733+0000","last_undegraded":"2026-03-10T13:37:55.617733+0000","last_fullsized":"2026-03-10T13:37:55.617733+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:02:01.131903+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.674131+0000","last_change":"2026-03-10T13:37:54.368034+0000","last_active":"2026-03-10T13:37:55.674131+0000","last_peered":"2026-03-10T13:37:55.674131+0000","last_clean":"2026-03-10T13:37:55.674131+0000","last_became_active":"2026-03-10T13:37:54.367917+0000","last_became_peered":"2026-03-10T13:37:54.367917+0000","last_unstale":"2026-03-10T13:37:55.674131+0000","last_undegraded":"2026-03-10T13:37:55.674131+0000","last_fullsized":"2026-03-10T13:37:55.674131+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:47:19.763693+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.c","version":"58'10","reported_seq":38,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.193827+0000","last_change":"2026-03-10T13:37:50.276348+0000","last_active":"2026-03-10T13:37:56.193827+0000","last_peered":"2026-03-10T13:37:56.193827+0000","last_clean":"2026-03-10T13:37:56.193827+0000","last_became_active":"2026-03-10T13:37:50.273423+0000","last_became_peered":"2026-03-10T13:37:50.273423+0000","last_unstale":"2026-03-10T13:37:56.193827+0000","last_undegraded":"2026-03-10T13:37:56.193827+0000","last_fullsized":"2026-03-10T13:37:56.193827+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:44:42.040423+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,6],"acting":[4,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.b","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602968+0000","last_change":"2026-03-10T13:37:48.259720+0000","last_active":"2026-03-10T13:37:55.602968+0000","last_peered":"2026-03-10T13:37:55.602968+0000","last_clean":"2026-03-10T13:37:55.602968+0000","last_became_active":"2026-03-10T13:37:48.259600+0000","last_became_peered":"2026-03-10T13:37:48.259600+0000","last_unstale":"2026-03-10T13:37:55.602968+0000","last_undegraded":"2026-03-10T13:37:55.602968+0000","last_fullsized":"2026-03-10T13:37:55.602968+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:44:39.981466+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.d","version":"58'8","reported_seq":30,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.780243+0000","last_change":"2026-03-10T13:37:52.286033+0000","last_active":"2026-03-10T13:37:55.780243+0000","last_peered":"2026-03-10T13:37:55.780243+0000","last_clean":"2026-03-10T13:37:55.780243+0000","last_became_active":"2026-03-10T13:37:52.285743+0000","last_became_peered":"2026-03-10T13:37:52.285743+0000","last_unstale":"2026-03-10T13:37:55.780243+0000","last_undegraded":"2026-03-10T13:37:55.780243+0000","last_fullsized":"2026-03-10T13:37:55.780243+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:21:56.418901+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596041+0000","last_change":"2026-03-10T13:37:54.348661+0000","last_active":"2026-03-10T13:37:55.596041+0000","last_peered":"2026-03-10T13:37:55.596041+0000","last_clean":"2026-03-10T13:37:55.596041+0000","last_became_active":"2026-03-10T13:37:54.348583+0000","last_became_peered":"2026-03-10T13:37:54.348583+0000","last_unstale":"2026-03-10T13:37:55.596041+0000","last_undegraded":"2026-03-10T13:37:55.596041+0000","last_fullsized":"2026-03-10T13:37:55.596041+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:18:05.306147+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.b","version":"58'9","reported_seq":39,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.222035+0000","last_change":"2026-03-10T13:37:50.278281+0000","last_active":"2026-03-10T13:37:56.222035+0000","last_peered":"2026-03-10T13:37:56.222035+0000","last_clean":"2026-03-10T13:37:56.222035+0000","last_became_active":"2026-03-10T13:37:50.278147+0000","last_became_peered":"2026-03-10T13:37:50.278147+0000","last_unstale":"2026-03-10T13:37:56.222035+0000","last_undegraded":"2026-03-10T13:37:56.222035+0000","last_fullsized":"2026-03-10T13:37:56.222035+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:39:52.781786+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.c","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629307+0000","last_change":"2026-03-10T13:37:48.248020+0000","last_active":"2026-03-10T13:37:55.629307+0000","last_peered":"2026-03-10T13:37:55.629307+0000","last_clean":"2026-03-10T13:37:55.629307+0000","last_became_active":"2026-03-10T13:37:48.247807+0000","last_became_peered":"2026-03-10T13:37:48.247807+0000","last_unstale":"2026-03-10T13:37:55.629307+0000","last_undegraded":"2026-03-10T13:37:55.629307+0000","last_fullsized":"2026-03-10T13:37:55.629307+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:06:33.699352+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.674311+0000","last_change":"2026-03-10T13:37:52.274397+0000","last_active":"2026-03-10T13:37:55.674311+0000","last_peered":"2026-03-10T13:37:55.674311+0000","last_clean":"2026-03-10T13:37:55.674311+0000","last_became_active":"2026-03-10T13:37:52.274279+0000","last_became_peered":"2026-03-10T13:37:52.274279+0000","last_unstale":"2026-03-10T13:37:55.674311+0000","last_undegraded":"2026-03-10T13:37:55.674311+0000","last_fullsized":"2026-03-10T13:37:55.674311+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:08:13.228403+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605780+0000","last_change":"2026-03-10T13:37:54.369365+0000","last_active":"2026-03-10T13:37:55.605780+0000","last_peered":"2026-03-10T13:37:55.605780+0000","last_clean":"2026-03-10T13:37:55.605780+0000","last_became_active":"2026-03-10T13:37:54.369229+0000","last_became_peered":"2026-03-10T13:37:54.369229+0000","last_unstale":"2026-03-10T13:37:55.605780+0000","last_undegraded":"2026-03-10T13:37:55.605780+0000","last_fullsized":"2026-03-10T13:37:55.605780+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:59:39.836675+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.a","version":"58'19","reported_seq":54,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.306229+0000","last_change":"2026-03-10T13:37:50.280045+0000","last_active":"2026-03-10T13:37:56.306229+0000","last_peered":"2026-03-10T13:37:56.306229+0000","last_clean":"2026-03-10T13:37:56.306229+0000","last_became_active":"2026-03-10T13:37:50.279899+0000","last_became_peered":"2026-03-10T13:37:50.279899+0000","last_unstale":"2026-03-10T13:37:56.306229+0000","last_undegraded":"2026-03-10T13:37:56.306229+0000","last_fullsized":"2026-03-10T13:37:56.306229+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:30:39.883196+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,1,7],"acting":[6,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.d","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594929+0000","last_change":"2026-03-10T13:37:48.243521+0000","last_active":"2026-03-10T13:37:55.594929+0000","last_peered":"2026-03-10T13:37:55.594929+0000","last_clean":"2026-03-10T13:37:55.594929+0000","last_became_active":"2026-03-10T13:37:48.243429+0000","last_became_peered":"2026-03-10T13:37:48.243429+0000","last_unstale":"2026-03-10T13:37:55.594929+0000","last_undegraded":"2026-03-10T13:37:55.594929+0000","last_fullsized":"2026-03-10T13:37:55.594929+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:31:48.610497+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.674341+0000","last_change":"2026-03-10T13:37:52.286368+0000","last_active":"2026-03-10T13:37:55.674341+0000","last_peered":"2026-03-10T13:37:55.674341+0000","last_clean":"2026-03-10T13:37:55.674341+0000","last_became_active":"2026-03-10T13:37:52.286142+0000","last_became_peered":"2026-03-10T13:37:52.286142+0000","last_unstale":"2026-03-10T13:37:55.674341+0000","last_undegraded":"2026-03-10T13:37:55.674341+0000","last_fullsized":"2026-03-10T13:37:55.674341+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:41:41.893089+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594907+0000","last_change":"2026-03-10T13:37:54.369328+0000","last_active":"2026-03-10T13:37:55.594907+0000","last_peered":"2026-03-10T13:37:55.594907+0000","last_clean":"2026-03-10T13:37:55.594907+0000","last_became_active":"2026-03-10T13:37:54.369158+0000","last_became_peered":"2026-03-10T13:37:54.369158+0000","last_unstale":"2026-03-10T13:37:55.594907+0000","last_undegraded":"2026-03-10T13:37:55.594907+0000","last_fullsized":"2026-03-10T13:37:55.594907+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:02:02.482736+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.9","version":"58'12","reported_seq":46,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.990102+0000","last_change":"2026-03-10T13:37:50.283201+0000","last_active":"2026-03-10T13:37:55.990102+0000","last_peered":"2026-03-10T13:37:55.990102+0000","last_clean":"2026-03-10T13:37:55.990102+0000","last_became_active":"2026-03-10T13:37:50.282949+0000","last_became_peered":"2026-03-10T13:37:50.282949+0000","last_unstale":"2026-03-10T13:37:55.990102+0000","last_undegraded":"2026-03-10T13:37:55.990102+0000","last_fullsized":"2026-03-10T13:37:55.990102+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:06:55.563842+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,3],"acting":[4,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.e","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594816+0000","last_change":"2026-03-10T13:37:48.257038+0000","last_active":"2026-03-10T13:37:55.594816+0000","last_peered":"2026-03-10T13:37:55.594816+0000","last_clean":"2026-03-10T13:37:55.594816+0000","last_became_active":"2026-03-10T13:37:48.256790+0000","last_became_peered":"2026-03-10T13:37:48.256790+0000","last_unstale":"2026-03-10T13:37:55.594816+0000","last_undegraded":"2026-03-10T13:37:55.594816+0000","last_fullsized":"2026-03-10T13:37:55.594816+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:23:30.297413+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.8","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.674285+0000","last_change":"2026-03-10T13:37:52.284884+0000","last_active":"2026-03-10T13:37:55.674285+0000","last_peered":"2026-03-10T13:37:55.674285+0000","last_clean":"2026-03-10T13:37:55.674285+0000","last_became_active":"2026-03-10T13:37:52.284804+0000","last_became_peered":"2026-03-10T13:37:52.284804+0000","last_unstale":"2026-03-10T13:37:55.674285+0000","last_undegraded":"2026-03-10T13:37:55.674285+0000","last_fullsized":"2026-03-10T13:37:55.674285+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:37:38.943911+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602702+0000","last_change":"2026-03-10T13:37:54.361277+0000","last_active":"2026-03-10T13:37:55.602702+0000","last_peered":"2026-03-10T13:37:55.602702+0000","last_clean":"2026-03-10T13:37:55.602702+0000","last_became_active":"2026-03-10T13:37:54.361029+0000","last_became_peered":"2026-03-10T13:37:54.361029+0000","last_unstale":"2026-03-10T13:37:55.602702+0000","last_undegraded":"2026-03-10T13:37:55.602702+0000","last_fullsized":"2026-03-10T13:37:55.602702+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:28:30.659041+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.8","version":"58'15","reported_seq":48,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.191145+0000","last_change":"2026-03-10T13:37:50.264100+0000","last_active":"2026-03-10T13:37:56.191145+0000","last_peered":"2026-03-10T13:37:56.191145+0000","last_clean":"2026-03-10T13:37:56.191145+0000","last_became_active":"2026-03-10T13:37:50.263960+0000","last_became_peered":"2026-03-10T13:37:50.263960+0000","last_unstale":"2026-03-10T13:37:56.191145+0000","last_undegraded":"2026-03-10T13:37:56.191145+0000","last_fullsized":"2026-03-10T13:37:56.191145+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:46:00.042637+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,6],"acting":[5,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.f","version":"50'2","reported_seq":39,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.651640+0000","last_change":"2026-03-10T13:37:48.257298+0000","last_active":"2026-03-10T13:37:55.651640+0000","last_peered":"2026-03-10T13:37:55.651640+0000","last_clean":"2026-03-10T13:37:55.651640+0000","last_became_active":"2026-03-10T13:37:48.256876+0000","last_became_peered":"2026-03-10T13:37:48.256876+0000","last_unstale":"2026-03-10T13:37:55.651640+0000","last_undegraded":"2026-03-10T13:37:55.651640+0000","last_fullsized":"2026-03-10T13:37:55.651640+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:55:18.391154+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.9","version":"58'8","reported_seq":30,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.797846+0000","last_change":"2026-03-10T13:37:52.277458+0000","last_active":"2026-03-10T13:37:55.797846+0000","last_peered":"2026-03-10T13:37:55.797846+0000","last_clean":"2026-03-10T13:37:55.797846+0000","last_became_active":"2026-03-10T13:37:52.277380+0000","last_became_peered":"2026-03-10T13:37:52.277380+0000","last_unstale":"2026-03-10T13:37:55.797846+0000","last_undegraded":"2026-03-10T13:37:55.797846+0000","last_fullsized":"2026-03-10T13:37:55.797846+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:50:27.191094+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.628738+0000","last_change":"2026-03-10T13:37:54.386443+0000","last_active":"2026-03-10T13:37:55.628738+0000","last_peered":"2026-03-10T13:37:55.628738+0000","last_clean":"2026-03-10T13:37:55.628738+0000","last_became_active":"2026-03-10T13:37:54.386047+0000","last_became_peered":"2026-03-10T13:37:54.386047+0000","last_unstale":"2026-03-10T13:37:55.628738+0000","last_undegraded":"2026-03-10T13:37:55.628738+0000","last_fullsized":"2026-03-10T13:37:55.628738+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:18:33.277833+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.10","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.636798+0000","last_change":"2026-03-10T13:37:48.251649+0000","last_active":"2026-03-10T13:37:55.636798+0000","last_peered":"2026-03-10T13:37:55.636798+0000","last_clean":"2026-03-10T13:37:55.636798+0000","last_became_active":"2026-03-10T13:37:48.251350+0000","last_became_peered":"2026-03-10T13:37:48.251350+0000","last_unstale":"2026-03-10T13:37:55.636798+0000","last_undegraded":"2026-03-10T13:37:55.636798+0000","last_fullsized":"2026-03-10T13:37:55.636798+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:49:22.184263+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.17","version":"58'6","reported_seq":32,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.208395+0000","last_change":"2026-03-10T13:37:50.284861+0000","last_active":"2026-03-10T13:37:56.208395+0000","last_peered":"2026-03-10T13:37:56.208395+0000","last_clean":"2026-03-10T13:37:56.208395+0000","last_became_active":"2026-03-10T13:37:50.284618+0000","last_became_peered":"2026-03-10T13:37:50.284618+0000","last_unstale":"2026-03-10T13:37:56.208395+0000","last_undegraded":"2026-03-10T13:37:56.208395+0000","last_fullsized":"2026-03-10T13:37:56.208395+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:08:04.316010+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629350+0000","last_change":"2026-03-10T13:37:52.339052+0000","last_active":"2026-03-10T13:37:55.629350+0000","last_peered":"2026-03-10T13:37:55.629350+0000","last_clean":"2026-03-10T13:37:55.629350+0000","last_became_active":"2026-03-10T13:37:52.338634+0000","last_became_peered":"2026-03-10T13:37:52.338634+0000","last_unstale":"2026-03-10T13:37:55.629350+0000","last_undegraded":"2026-03-10T13:37:55.629350+0000","last_fullsized":"2026-03-10T13:37:55.629350+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:57:28.989203+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.595190+0000","last_change":"2026-03-10T13:37:54.384464+0000","last_active":"2026-03-10T13:37:55.595190+0000","last_peered":"2026-03-10T13:37:55.595190+0000","last_clean":"2026-03-10T13:37:55.595190+0000","last_became_active":"2026-03-10T13:37:54.384376+0000","last_became_peered":"2026-03-10T13:37:54.384376+0000","last_unstale":"2026-03-10T13:37:55.595190+0000","last_undegraded":"2026-03-10T13:37:55.595190+0000","last_fullsized":"2026-03-10T13:37:55.595190+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:51:50.853700+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.16","version":"58'9","reported_seq":39,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.016298+0000","last_change":"2026-03-10T13:37:50.274920+0000","last_active":"2026-03-10T13:37:56.016298+0000","last_peered":"2026-03-10T13:37:56.016298+0000","last_clean":"2026-03-10T13:37:56.016298+0000","last_became_active":"2026-03-10T13:37:50.274656+0000","last_became_peered":"2026-03-10T13:37:50.274656+0000","last_unstale":"2026-03-10T13:37:56.016298+0000","last_undegraded":"2026-03-10T13:37:56.016298+0000","last_fullsized":"2026-03-10T13:37:56.016298+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:58:41.517370+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,7],"acting":[0,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.11","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594683+0000","last_change":"2026-03-10T13:37:48.265357+0000","last_active":"2026-03-10T13:37:55.594683+0000","last_peered":"2026-03-10T13:37:55.594683+0000","last_clean":"2026-03-10T13:37:55.594683+0000","last_became_active":"2026-03-10T13:37:48.264483+0000","last_became_peered":"2026-03-10T13:37:48.264483+0000","last_unstale":"2026-03-10T13:37:55.594683+0000","last_undegraded":"2026-03-10T13:37:55.594683+0000","last_fullsized":"2026-03-10T13:37:55.594683+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:36:39.093212+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602745+0000","last_change":"2026-03-10T13:37:52.270195+0000","last_active":"2026-03-10T13:37:55.602745+0000","last_peered":"2026-03-10T13:37:55.602745+0000","last_clean":"2026-03-10T13:37:55.602745+0000","last_became_active":"2026-03-10T13:37:52.270078+0000","last_became_peered":"2026-03-10T13:37:52.270078+0000","last_unstale":"2026-03-10T13:37:55.602745+0000","last_undegraded":"2026-03-10T13:37:55.602745+0000","last_fullsized":"2026-03-10T13:37:55.602745+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:37:41.644438+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.673753+0000","last_change":"2026-03-10T13:37:54.357080+0000","last_active":"2026-03-10T13:37:55.673753+0000","last_peered":"2026-03-10T13:37:55.673753+0000","last_clean":"2026-03-10T13:37:55.673753+0000","last_became_active":"2026-03-10T13:37:54.356988+0000","last_became_peered":"2026-03-10T13:37:54.356988+0000","last_unstale":"2026-03-10T13:37:55.673753+0000","last_undegraded":"2026-03-10T13:37:55.673753+0000","last_fullsized":"2026-03-10T13:37:55.673753+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:49:57.476414+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.15","version":"58'9","reported_seq":39,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.246001+0000","last_change":"2026-03-10T13:37:50.274156+0000","last_active":"2026-03-10T13:37:56.246001+0000","last_peered":"2026-03-10T13:37:56.246001+0000","last_clean":"2026-03-10T13:37:56.246001+0000","last_became_active":"2026-03-10T13:37:50.274059+0000","last_became_peered":"2026-03-10T13:37:50.274059+0000","last_unstale":"2026-03-10T13:37:56.246001+0000","last_undegraded":"2026-03-10T13:37:56.246001+0000","last_fullsized":"2026-03-10T13:37:56.246001+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:34:26.693385+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,3],"acting":[5,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.12","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605314+0000","last_change":"2026-03-10T13:37:48.258219+0000","last_active":"2026-03-10T13:37:55.605314+0000","last_peered":"2026-03-10T13:37:55.605314+0000","last_clean":"2026-03-10T13:37:55.605314+0000","last_became_active":"2026-03-10T13:37:48.258120+0000","last_became_peered":"2026-03-10T13:37:48.258120+0000","last_unstale":"2026-03-10T13:37:55.605314+0000","last_undegraded":"2026-03-10T13:37:55.605314+0000","last_fullsized":"2026-03-10T13:37:55.605314+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:09:26.202966+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"58'8","reported_seq":30,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.792802+0000","last_change":"2026-03-10T13:37:52.266948+0000","last_active":"2026-03-10T13:37:55.792802+0000","last_peered":"2026-03-10T13:37:55.792802+0000","last_clean":"2026-03-10T13:37:55.792802+0000","last_became_active":"2026-03-10T13:37:52.266858+0000","last_became_peered":"2026-03-10T13:37:52.266858+0000","last_unstale":"2026-03-10T13:37:55.792802+0000","last_undegraded":"2026-03-10T13:37:55.792802+0000","last_fullsized":"2026-03-10T13:37:55.792802+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:05:08.260539+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"58'1","reported_seq":16,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.595835+0000","last_change":"2026-03-10T13:37:54.368333+0000","last_active":"2026-03-10T13:37:55.595835+0000","last_peered":"2026-03-10T13:37:55.595835+0000","last_clean":"2026-03-10T13:37:55.595835+0000","last_became_active":"2026-03-10T13:37:54.368106+0000","last_became_peered":"2026-03-10T13:37:54.368106+0000","last_unstale":"2026-03-10T13:37:55.595835+0000","last_undegraded":"2026-03-10T13:37:55.595835+0000","last_fullsized":"2026-03-10T13:37:55.595835+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:27:14.776182+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.14","version":"58'10","reported_seq":38,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.230485+0000","last_change":"2026-03-10T13:37:50.284797+0000","last_active":"2026-03-10T13:37:56.230485+0000","last_peered":"2026-03-10T13:37:56.230485+0000","last_clean":"2026-03-10T13:37:56.230485+0000","last_became_active":"2026-03-10T13:37:50.284487+0000","last_became_peered":"2026-03-10T13:37:50.284487+0000","last_unstale":"2026-03-10T13:37:56.230485+0000","last_undegraded":"2026-03-10T13:37:56.230485+0000","last_fullsized":"2026-03-10T13:37:56.230485+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:13:30.415464+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.13","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594619+0000","last_change":"2026-03-10T13:37:48.263778+0000","last_active":"2026-03-10T13:37:55.594619+0000","last_peered":"2026-03-10T13:37:55.594619+0000","last_clean":"2026-03-10T13:37:55.594619+0000","last_became_active":"2026-03-10T13:37:48.263579+0000","last_became_peered":"2026-03-10T13:37:48.263579+0000","last_unstale":"2026-03-10T13:37:55.594619+0000","last_undegraded":"2026-03-10T13:37:55.594619+0000","last_fullsized":"2026-03-10T13:37:55.594619+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:03:29.980141+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.15","version":"58'8","reported_seq":30,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.776431+0000","last_change":"2026-03-10T13:37:52.338760+0000","last_active":"2026-03-10T13:37:55.776431+0000","last_peered":"2026-03-10T13:37:55.776431+0000","last_clean":"2026-03-10T13:37:55.776431+0000","last_became_active":"2026-03-10T13:37:52.338399+0000","last_became_peered":"2026-03-10T13:37:52.338399+0000","last_unstale":"2026-03-10T13:37:55.776431+0000","last_undegraded":"2026-03-10T13:37:55.776431+0000","last_fullsized":"2026-03-10T13:37:55.776431+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:57:39.442630+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.604991+0000","last_change":"2026-03-10T13:37:54.377632+0000","last_active":"2026-03-10T13:37:55.604991+0000","last_peered":"2026-03-10T13:37:55.604991+0000","last_clean":"2026-03-10T13:37:55.604991+0000","last_became_active":"2026-03-10T13:37:54.377530+0000","last_became_peered":"2026-03-10T13:37:54.377530+0000","last_unstale":"2026-03-10T13:37:55.604991+0000","last_undegraded":"2026-03-10T13:37:55.604991+0000","last_fullsized":"2026-03-10T13:37:55.604991+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:18:34.057127+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.13","version":"58'11","reported_seq":42,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.219649+0000","last_change":"2026-03-10T13:37:50.294733+0000","last_active":"2026-03-10T13:37:56.219649+0000","last_peered":"2026-03-10T13:37:56.219649+0000","last_clean":"2026-03-10T13:37:56.219649+0000","last_became_active":"2026-03-10T13:37:50.293901+0000","last_became_peered":"2026-03-10T13:37:50.293901+0000","last_unstale":"2026-03-10T13:37:56.219649+0000","last_undegraded":"2026-03-10T13:37:56.219649+0000","last_fullsized":"2026-03-10T13:37:56.219649+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:32:16.590805+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.14","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.596378+0000","last_change":"2026-03-10T13:37:48.257752+0000","last_active":"2026-03-10T13:37:55.596378+0000","last_peered":"2026-03-10T13:37:55.596378+0000","last_clean":"2026-03-10T13:37:55.596378+0000","last_became_active":"2026-03-10T13:37:48.256678+0000","last_became_peered":"2026-03-10T13:37:48.256678+0000","last_unstale":"2026-03-10T13:37:55.596378+0000","last_undegraded":"2026-03-10T13:37:55.596378+0000","last_fullsized":"2026-03-10T13:37:55.596378+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T01:13:57.296676+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.12","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.616862+0000","last_change":"2026-03-10T13:37:52.288481+0000","last_active":"2026-03-10T13:37:55.616862+0000","last_peered":"2026-03-10T13:37:55.616862+0000","last_clean":"2026-03-10T13:37:55.616862+0000","last_became_active":"2026-03-10T13:37:52.288322+0000","last_became_peered":"2026-03-10T13:37:52.288322+0000","last_unstale":"2026-03-10T13:37:55.616862+0000","last_undegraded":"2026-03-10T13:37:55.616862+0000","last_fullsized":"2026-03-10T13:37:55.616862+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:54:35.899719+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.602534+0000","last_change":"2026-03-10T13:37:54.371650+0000","last_active":"2026-03-10T13:37:55.602534+0000","last_peered":"2026-03-10T13:37:55.602534+0000","last_clean":"2026-03-10T13:37:55.602534+0000","last_became_active":"2026-03-10T13:37:54.371553+0000","last_became_peered":"2026-03-10T13:37:54.371553+0000","last_unstale":"2026-03-10T13:37:55.602534+0000","last_undegraded":"2026-03-10T13:37:55.602534+0000","last_fullsized":"2026-03-10T13:37:55.602534+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:17:43.593519+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.12","version":"58'9","reported_seq":39,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.116785+0000","last_change":"2026-03-10T13:37:50.285101+0000","last_active":"2026-03-10T13:37:56.116785+0000","last_peered":"2026-03-10T13:37:56.116785+0000","last_clean":"2026-03-10T13:37:56.116785+0000","last_became_active":"2026-03-10T13:37:50.283745+0000","last_became_peered":"2026-03-10T13:37:50.283745+0000","last_unstale":"2026-03-10T13:37:56.116785+0000","last_undegraded":"2026-03-10T13:37:56.116785+0000","last_fullsized":"2026-03-10T13:37:56.116785+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:19:37.172925+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.15","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594584+0000","last_change":"2026-03-10T13:37:48.263844+0000","last_active":"2026-03-10T13:37:55.594584+0000","last_peered":"2026-03-10T13:37:55.594584+0000","last_clean":"2026-03-10T13:37:55.594584+0000","last_became_active":"2026-03-10T13:37:48.263702+0000","last_became_peered":"2026-03-10T13:37:48.263702+0000","last_unstale":"2026-03-10T13:37:55.594584+0000","last_undegraded":"2026-03-10T13:37:55.594584+0000","last_fullsized":"2026-03-10T13:37:55.594584+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:45:11.766444+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.603374+0000","last_change":"2026-03-10T13:37:52.288313+0000","last_active":"2026-03-10T13:37:55.603374+0000","last_peered":"2026-03-10T13:37:55.603374+0000","last_clean":"2026-03-10T13:37:55.603374+0000","last_became_active":"2026-03-10T13:37:52.287426+0000","last_became_peered":"2026-03-10T13:37:52.287426+0000","last_unstale":"2026-03-10T13:37:55.603374+0000","last_undegraded":"2026-03-10T13:37:55.603374+0000","last_fullsized":"2026-03-10T13:37:55.603374+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:37:27.552906+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605074+0000","last_change":"2026-03-10T13:37:54.368623+0000","last_active":"2026-03-10T13:37:55.605074+0000","last_peered":"2026-03-10T13:37:55.605074+0000","last_clean":"2026-03-10T13:37:55.605074+0000","last_became_active":"2026-03-10T13:37:54.368437+0000","last_became_peered":"2026-03-10T13:37:54.368437+0000","last_unstale":"2026-03-10T13:37:55.605074+0000","last_undegraded":"2026-03-10T13:37:55.605074+0000","last_fullsized":"2026-03-10T13:37:55.605074+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:14:31.149045+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.11","version":"58'11","reported_seq":42,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.285128+0000","last_change":"2026-03-10T13:37:50.276082+0000","last_active":"2026-03-10T13:37:56.285128+0000","last_peered":"2026-03-10T13:37:56.285128+0000","last_clean":"2026-03-10T13:37:56.285128+0000","last_became_active":"2026-03-10T13:37:50.275886+0000","last_became_peered":"2026-03-10T13:37:50.275886+0000","last_unstale":"2026-03-10T13:37:56.285128+0000","last_undegraded":"2026-03-10T13:37:56.285128+0000","last_fullsized":"2026-03-10T13:37:56.285128+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-12T00:42:30.504771+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.16","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.629199+0000","last_change":"2026-03-10T13:37:48.244177+0000","last_active":"2026-03-10T13:37:55.629199+0000","last_peered":"2026-03-10T13:37:55.629199+0000","last_clean":"2026-03-10T13:37:55.629199+0000","last_became_active":"2026-03-10T13:37:48.244050+0000","last_became_peered":"2026-03-10T13:37:48.244050+0000","last_unstale":"2026-03-10T13:37:55.629199+0000","last_undegraded":"2026-03-10T13:37:55.629199+0000","last_fullsized":"2026-03-10T13:37:55.629199+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:38:21.326943+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594563+0000","last_change":"2026-03-10T13:37:52.277330+0000","last_active":"2026-03-10T13:37:55.594563+0000","last_peered":"2026-03-10T13:37:55.594563+0000","last_clean":"2026-03-10T13:37:55.594563+0000","last_became_active":"2026-03-10T13:37:52.277147+0000","last_became_peered":"2026-03-10T13:37:52.277147+0000","last_unstale":"2026-03-10T13:37:55.594563+0000","last_undegraded":"2026-03-10T13:37:55.594563+0000","last_fullsized":"2026-03-10T13:37:55.594563+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:32:54.063417+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.13","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.603560+0000","last_change":"2026-03-10T13:37:54.385210+0000","last_active":"2026-03-10T13:37:55.603560+0000","last_peered":"2026-03-10T13:37:55.603560+0000","last_clean":"2026-03-10T13:37:55.603560+0000","last_became_active":"2026-03-10T13:37:54.384453+0000","last_became_peered":"2026-03-10T13:37:54.384453+0000","last_unstale":"2026-03-10T13:37:55.603560+0000","last_undegraded":"2026-03-10T13:37:55.603560+0000","last_fullsized":"2026-03-10T13:37:55.603560+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:06:28.083704+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.10","version":"58'4","reported_seq":29,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.996704+0000","last_change":"2026-03-10T13:37:50.284946+0000","last_active":"2026-03-10T13:37:55.996704+0000","last_peered":"2026-03-10T13:37:55.996704+0000","last_clean":"2026-03-10T13:37:55.996704+0000","last_became_active":"2026-03-10T13:37:50.284723+0000","last_became_peered":"2026-03-10T13:37:50.284723+0000","last_unstale":"2026-03-10T13:37:55.996704+0000","last_undegraded":"2026-03-10T13:37:55.996704+0000","last_fullsized":"2026-03-10T13:37:55.996704+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:44:17.113471+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,6],"acting":[3,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605241+0000","last_change":"2026-03-10T13:37:48.259317+0000","last_active":"2026-03-10T13:37:55.605241+0000","last_peered":"2026-03-10T13:37:55.605241+0000","last_clean":"2026-03-10T13:37:55.605241+0000","last_became_active":"2026-03-10T13:37:48.259224+0000","last_became_peered":"2026-03-10T13:37:48.259224+0000","last_unstale":"2026-03-10T13:37:55.605241+0000","last_undegraded":"2026-03-10T13:37:55.605241+0000","last_fullsized":"2026-03-10T13:37:55.605241+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:22:05.122953+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.637109+0000","last_change":"2026-03-10T13:37:52.275662+0000","last_active":"2026-03-10T13:37:55.637109+0000","last_peered":"2026-03-10T13:37:55.637109+0000","last_clean":"2026-03-10T13:37:55.637109+0000","last_became_active":"2026-03-10T13:37:52.275540+0000","last_became_peered":"2026-03-10T13:37:52.275540+0000","last_unstale":"2026-03-10T13:37:55.637109+0000","last_undegraded":"2026-03-10T13:37:55.637109+0000","last_fullsized":"2026-03-10T13:37:55.637109+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:34:45.609077+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.594477+0000","last_change":"2026-03-10T13:37:54.369869+0000","last_active":"2026-03-10T13:37:55.594477+0000","last_peered":"2026-03-10T13:37:55.594477+0000","last_clean":"2026-03-10T13:37:55.594477+0000","last_became_active":"2026-03-10T13:37:54.369779+0000","last_became_peered":"2026-03-10T13:37:54.369779+0000","last_unstale":"2026-03-10T13:37:55.594477+0000","last_undegraded":"2026-03-10T13:37:55.594477+0000","last_fullsized":"2026-03-10T13:37:55.594477+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:15:20.019452+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":15,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.617456+0000","last_change":"2026-03-10T13:37:54.351436+0000","last_active":"2026-03-10T13:37:55.617456+0000","last_peered":"2026-03-10T13:37:55.617456+0000","last_clean":"2026-03-10T13:37:55.617456+0000","last_became_active":"2026-03-10T13:37:54.351285+0000","last_became_peered":"2026-03-10T13:37:54.351285+0000","last_unstale":"2026-03-10T13:37:55.617456+0000","last_undegraded":"2026-03-10T13:37:55.617456+0000","last_fullsized":"2026-03-10T13:37:55.617456+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:53.256597+0000","last_clean_scrub_stamp":"2026-03-10T13:37:53.256597+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:45:17.347531+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","version":"0'0","reported_seq":27,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.603033+0000","last_change":"2026-03-10T13:37:48.255695+0000","last_active":"2026-03-10T13:37:55.603033+0000","last_peered":"2026-03-10T13:37:55.603033+0000","last_clean":"2026-03-10T13:37:55.603033+0000","last_became_active":"2026-03-10T13:37:48.255584+0000","last_became_peered":"2026-03-10T13:37:48.255584+0000","last_unstale":"2026-03-10T13:37:55.603033+0000","last_undegraded":"2026-03-10T13:37:55.603033+0000","last_fullsized":"2026-03-10T13:37:55.603033+0000","mapping_epoch":49,"log_start":"0'0","ondisk_log_start":"0'0","created":49,"last_epoch_clean":50,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:47.219639+0000","last_clean_scrub_stamp":"2026-03-10T13:37:47.219639+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:50:23.329268+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.1f","version":"58'11","reported_seq":42,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:56.287899+0000","last_change":"2026-03-10T13:37:50.280444+0000","last_active":"2026-03-10T13:37:56.287899+0000","last_peered":"2026-03-10T13:37:56.287899+0000","last_clean":"2026-03-10T13:37:56.287899+0000","last_became_active":"2026-03-10T13:37:50.280101+0000","last_became_peered":"2026-03-10T13:37:50.280101+0000","last_unstale":"2026-03-10T13:37:56.287899+0000","last_undegraded":"2026-03-10T13:37:56.287899+0000","last_fullsized":"2026-03-10T13:37:56.287899+0000","mapping_epoch":51,"log_start":"0'0","ondisk_log_start":"0'0","created":51,"last_epoch_clean":52,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:49.230505+0000","last_clean_scrub_stamp":"2026-03-10T13:37:49.230505+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:12:07.663854+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,1],"acting":[6,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":19,"reported_epoch":58,"state":"active+clean","last_fresh":"2026-03-10T13:37:55.605474+0000","last_change":"2026-03-10T13:37:52.335288+0000","last_active":"2026-03-10T13:37:55.605474+0000","last_peered":"2026-03-10T13:37:55.605474+0000","last_clean":"2026-03-10T13:37:55.605474+0000","last_became_active":"2026-03-10T13:37:52.335140+0000","last_became_peered":"2026-03-10T13:37:52.335140+0000","last_unstale":"2026-03-10T13:37:55.605474+0000","last_undegraded":"2026-03-10T13:37:55.605474+0000","last_fullsized":"2026-03-10T13:37:55.605474+0000","mapping_epoch":53,"log_start":"0'0","ondisk_log_start":"0'0","created":53,"last_epoch_clean":54,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:37:51.251358+0000","last_clean_scrub_stamp":"2026-03-10T13:37:51.251358+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T22:30:19.241412+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":64,"ondisk_log_size":64,"up":96,"acting":96,"num_store_stats":8},
{"poolid":4,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":698,"num_read_kb":455,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":393,"ondisk_log_size":393,"up":96,"acting":96,"num_store_stats":8},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":24,"num_read_kb":24,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":7,"num_read_kb":2,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"int
ernal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":2777088,"data_stored":2755680,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":7}],"osd_stats":[{"osd":7,"up_from":46,"seq":197568495621,"num_pgs":46,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27860,"kb_used_data":1028,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939564,"statfs":{"total":21470642176,"available":21442113536,"internally_reserved":0,"allocated":1052672,"data_stored":681753,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":41,"seq":176093659143,"num_pgs":43,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27844,"kb_used_data":1004,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939580,"statfs":{"total":21470642176,"available":21442129920,"internally_reserved":0,"allocated":1028096,"data_stored":679675,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":35,"seq":150323855370,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27408,"kb_used_data":564,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940016,"statfs":{"total":21470642176,"available":21442576384,"internally_reserved":0,"allocated":577536,"data_stored":221203,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_fro
m":28,"seq":120259084300,"num_pgs":58,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27420,"kb_used_data":588,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940004,"statfs":{"total":21470642176,"available":21442564096,"internally_reserved":0,"allocated":602112,"data_stored":222006,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":24,"seq":103079215118,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27432,"kb_used_data":588,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939992,"statfs":{"total":21470642176,"available":21442551808,"internally_reserved":0,"allocated":602112,"data_stored":221286,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":17,"seq":73014444048,"num_pgs":36,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27396,"kb_used_data":556,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940028,"statfs":{"total":21470642176,"available":21442588672,"internally_reserved":0,"allocated":569344,"data_stored":220992,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574866,"num_pgs":57,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27452,"kb_used_data":620,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939972,"statfs":{"total":21470642176,"available":21442531328,"internally_reserved":0,"allocated":634880,"data_stored":222616,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":9,"seq":38654705684,"num_pgs":46,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27864,"kb_used_data":1032,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939560,"statfs":{"total":21470642176,"available":21442109440,"internally_reserved":0,"allocated":1056768,"data_stored":681462,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat"
:{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":408,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1131,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserv
ed":0,"allocated":12288,"data_stored":528,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":20480,"data_stored":1177,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1085,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":90112,"data_stored":2338,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":32768,"data_stored":798,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":1898,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":53248,"data_stored":1474,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":990,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":1034,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1254,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"tota
l":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T13:38:03.800 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T13:38:03.800 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-10T13:38:03.800 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T13:38:03.800 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph health --format=json 2026-03-10T13:38:04.029 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:03 vm05 ceph-mon[51512]: pgmap v116: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 58 KiB/s rd, 4.4 KiB/s wr, 140 op/s 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:03 vm05 ceph-mon[51512]: from='client.14694 v1:192.168.123.105:0/3863319795' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:03 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:03 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ignoring --setuser ceph since I am not root 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:03 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ignoring --setgroup ceph since I am not root 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:03 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:03.982+0000 7f66344e3140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:04 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:04.028+0000 7f66344e3140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:03 vm05 ceph-mon[58955]: pgmap v116: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 58 KiB/s rd, 4.4 KiB/s wr, 140 op/s 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:03 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:03 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:03 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:03 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T13:38:04.030 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:03 vm05 ceph-mon[58955]: from='client.14694 v1:192.168.123.105:0/3863319795' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:38:04.030 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:03 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:38:04.049 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config 2026-03-10T13:38:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:03 vm09 ceph-mon[53367]: pgmap v116: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 58 KiB/s rd, 4.4 KiB/s wr, 140 op/s 2026-03-10T13:38:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:03 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:38:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:03 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:38:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:03 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:38:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:03 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T13:38:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:03 vm09 ceph-mon[53367]: from='client.14694 v1:192.168.123.105:0/3863319795' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:38:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:03 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' 2026-03-10T13:38:04.173 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:03 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: ignoring --setuser ceph since I am not root 2026-03-10T13:38:04.173 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:03 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: ignoring --setgroup ceph since I am not root 2026-03-10T13:38:04.173 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:03 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:03.967+0000 7f2c044fc140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T13:38:04.173 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:04 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:04.010+0000 7f2c044fc140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T13:38:04.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.220+0000 7f1416260640 1 -- 192.168.123.105:0/91734461 >> v1:192.168.123.105:6789/0 conn(0x7f14080b9d20 legacy=0x7f14080bc110 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:04.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.220+0000 7f1416260640 1 -- 192.168.123.105:0/91734461 shutdown_connections 2026-03-10T13:38:04.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.220+0000 7f1416260640 1 -- 192.168.123.105:0/91734461 >> 192.168.123.105:0/91734461 conn(0x7f140801a440 msgr2=0x7f140801a850 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:38:04.220 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.220+0000 
7f1416260640 1 -- 192.168.123.105:0/91734461 shutdown_connections 2026-03-10T13:38:04.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.220+0000 7f1416260640 1 -- 192.168.123.105:0/91734461 wait complete. 2026-03-10T13:38:04.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.220+0000 7f1416260640 1 Processor -- start 2026-03-10T13:38:04.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.220+0000 7f1416260640 1 -- start start 2026-03-10T13:38:04.221 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.221+0000 7f1416260640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f140815cab0 con 0x7f14080a5520 2026-03-10T13:38:04.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.221+0000 7f1416260640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f140815dcb0 con 0x7f14080a4a30 2026-03-10T13:38:04.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.221+0000 7f1416260640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f140815eeb0 con 0x7f14080b6aa0 2026-03-10T13:38:04.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.221+0000 7f141525e640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f14080a4a30 0x7f14080b62d0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:42266/0 (socket says 192.168.123.105:42266) 2026-03-10T13:38:04.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.221+0000 7f141525e640 1 -- 192.168.123.105:0/2538897117 learned_addr learned my addr 192.168.123.105:0/2538897117 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T13:38:04.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.221+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1375980610 0 0) 0x7f140815cab0 con 0x7f14080a5520 2026-03-10T13:38:04.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.222+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f13e4003620 con 0x7f14080a5520 2026-03-10T13:38:04.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.222+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1621586463 0 0) 0x7f140815dcb0 con 0x7f14080a4a30 2026-03-10T13:38:04.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.222+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f140815cab0 con 0x7f14080a4a30 2026-03-10T13:38:04.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.222+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2375709871 0 0) 0x7f13e4003620 con 0x7f14080a5520 2026-03-10T13:38:04.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.222+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f140815dcb0 con 0x7f14080a5520 2026-03-10T13:38:04.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.222+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) 
==== 33+0+0 (unknown 1330662363 0 0) 0x7f140815eeb0 con 0x7f14080b6aa0 2026-03-10T13:38:04.222 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.222+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f13e4003620 con 0x7f14080b6aa0 2026-03-10T13:38:04.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.222+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f1410050f30 con 0x7f14080a5520 2026-03-10T13:38:04.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.222+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3222401849 0 0) 0x7f140815cab0 con 0x7f14080a4a30 2026-03-10T13:38:04.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.222+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f140815eeb0 con 0x7f14080a4a30 2026-03-10T13:38:04.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.222+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 4201408205 0 0) 0x7f140815dcb0 con 0x7f14080a5520 2026-03-10T13:38:04.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.222+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 >> v1:192.168.123.105:6790/0 conn(0x7f14080b6aa0 legacy=0x7f140815b250 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:04.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.222+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 >> v1:192.168.123.109:6789/0 conn(0x7f14080a4a30 legacy=0x7f14080b62d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:04.223 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.223+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f14081600b0 con 0x7f14080a5520 2026-03-10T13:38:04.225 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.223+0000 7f1416260640 1 -- 192.168.123.105:0/2538897117 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f140815cce0 con 0x7f14080a5520 2026-03-10T13:38:04.225 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.223+0000 7f1416260640 1 -- 192.168.123.105:0/2538897117 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f140815d1a0 con 0x7f14080a5520 2026-03-10T13:38:04.225 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.223+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f1410063510 con 0x7f14080a5520 2026-03-10T13:38:04.225 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.223+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f1410069690 con 0x7f14080a5520 2026-03-10T13:38:04.225 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.223+0000 7f1416260640 1 -- 192.168.123.105:0/2538897117 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f14080a5e20 con 0x7f14080a5520 2026-03-10T13:38:04.227 
INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.227+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 17) ==== 100065+0+0 (unknown 2677594841 0 0) 0x7f14100630c0 con 0x7f14080a5520 2026-03-10T13:38:04.228 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.228+0000 7f141525e640 1 -- 192.168.123.105:0/2538897117 >> v1:192.168.123.105:6800/3845654103 conn(0x7f13e40786d0 legacy=0x7f13e407ab90 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v1:192.168.123.105:6800/3845654103 2026-03-10T13:38:04.228 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.228+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(58..58 src has 1..58) ==== 5922+0+0 (unknown 2820760411 0 0) 0x7f14100f86b0 con 0x7f14080a5520 2026-03-10T13:38:04.228 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.228+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f14100636c0 con 0x7f14080a5520 2026-03-10T13:38:04.356 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.355+0000 7f1416260640 1 -- 192.168.123.105:0/2538897117 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "health", "format": "json"} v 0) -- 0x7f14080be6f0 con 0x7f14080a5520 2026-03-10T13:38:04.356 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.356+0000 7f13fe7fc640 1 -- 192.168.123.105:0/2538897117 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "health", "format": "json"}]=0 v0) ==== 72+0+46 (unknown 3730786198 0 4185958460) 0x7f14100c1620 con 0x7f14080a5520 2026-03-10T13:38:04.356 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T13:38:04.356 INFO:teuthology.orchestra.run.vm05.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T13:38:04.362 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.362+0000 7f13dffff640 1 -- 192.168.123.105:0/2538897117 >> v1:192.168.123.105:6800/3845654103 conn(0x7f13e40786d0 legacy=0x7f13e407ab90 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T13:38:04.362 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.362+0000 7f13dffff640 1 -- 192.168.123.105:0/2538897117 >> v1:192.168.123.105:6789/0 conn(0x7f14080a5520 legacy=0x7f14080bbff0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T13:38:04.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.362+0000 7f13dffff640 1 -- 192.168.123.105:0/2538897117 shutdown_connections 2026-03-10T13:38:04.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.362+0000 7f13dffff640 1 -- 192.168.123.105:0/2538897117 >> 192.168.123.105:0/2538897117 conn(0x7f140801a440 msgr2=0x7f14080a4240 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T13:38:04.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.362+0000 7f13dffff640 1 -- 192.168.123.105:0/2538897117 shutdown_connections 2026-03-10T13:38:04.363 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-10T13:38:04.362+0000 7f13dffff640 1 -- 192.168.123.105:0/2538897117 wait complete. 2026-03-10T13:38:04.527 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T13:38:04.527 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T13:38:04.527 INFO:teuthology.run_tasks:Running task workunit... 
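The wait_until_healthy step above repeats the `cephadm ... shell --fsid ... -- ceph health --format=json` call shown earlier in the log until the cluster returns the {"status":"HEALTH_OK",...} payload. A minimal polling sketch under those assumptions; the image reference, fsid, and timeout values are placeholders, not the values from this run:

import json
import subprocess
import time

IMAGE = "quay.ceph.io/ceph-ci/ceph:<sha1>"        # placeholder image reference
FSID = "00000000-0000-0000-0000-000000000000"     # placeholder cluster fsid

def wait_until_healthy(timeout: float = 300.0, interval: float = 5.0) -> None:
    # Poll `ceph health --format=json` through `cephadm shell`, as in the log
    # above, until the cluster reports HEALTH_OK or the timeout expires.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        out = subprocess.check_output(
            ["sudo", "cephadm", "--image", IMAGE, "shell", "--fsid", FSID,
             "--", "ceph", "health", "--format=json"],
            text=True,
        )
        if json.loads(out)["status"] == "HEALTH_OK":
            return
        time.sleep(interval)
    raise TimeoutError("cluster did not reach HEALTH_OK in time")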
2026-03-10T13:38:04.532 INFO:tasks.workunit:Pulling workunits from ref 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T13:38:04.532 INFO:tasks.workunit:Making a separate scratch dir for every client... 2026-03-10T13:38:04.532 DEBUG:teuthology.orchestra.run.vm05:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-10T13:38:04.551 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T13:38:04.551 INFO:teuthology.orchestra.run.vm05.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-10T13:38:04.551 DEBUG:teuthology.orchestra.run.vm05:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-10T13:38:04.608 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-10T13:38:04.608 DEBUG:teuthology.orchestra.run.vm05:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-10T13:38:04.667 INFO:tasks.workunit:timeout=3h 2026-03-10T13:38:04.667 INFO:tasks.workunit:cleanup=True 2026-03-10T13:38:04.668 DEBUG:teuthology.orchestra.run.vm05:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T13:38:04.726 INFO:tasks.workunit.client.0.vm05.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-10T13:38:04.758 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:04 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:04.442+0000 7f2c044fc140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T13:38:04.827 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:04 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:04.494+0000 7f66344e3140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T13:38:05.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:04 vm05 ceph-mon[58955]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T13:38:05.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:04 vm05 ceph-mon[58955]: mgrmap e17: y(active, since 2m), standbys: x 2026-03-10T13:38:05.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2538897117' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T13:38:05.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:04 vm05 ceph-mon[51512]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T13:38:05.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:04 vm05 ceph-mon[51512]: mgrmap e17: y(active, since 2m), standbys: x 2026-03-10T13:38:05.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:04 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2538897117' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T13:38:05.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:04 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:04.844+0000 7f66344e3140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T13:38:05.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:04 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T13:38:05.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:04 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T13:38:05.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:04 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: from numpy import show_config as show_numpy_config 2026-03-10T13:38:05.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:04 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:04.934+0000 7f66344e3140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T13:38:05.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:04 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:04.971+0000 7f66344e3140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T13:38:05.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:05 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:05.044+0000 7f66344e3140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T13:38:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:04 vm09 ceph-mon[53367]: from='mgr.14150 v1:192.168.123.105:0/791039324' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T13:38:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:04 vm09 ceph-mon[53367]: mgrmap e17: y(active, since 2m), standbys: x 2026-03-10T13:38:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2538897117' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T13:38:05.173 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:04 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:04.758+0000 7f2c044fc140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T13:38:05.173 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:04 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
2026-03-10T13:38:05.173 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:04 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T13:38:05.173 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:04 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: from numpy import show_config as show_numpy_config 2026-03-10T13:38:05.173 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:04 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:04.843+0000 7f2c044fc140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T13:38:05.173 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:04 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:04.881+0000 7f2c044fc140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T13:38:05.173 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:04 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:04.950+0000 7f2c044fc140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T13:38:05.738 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:05 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:05.468+0000 7f2c044fc140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T13:38:05.738 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:05 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:05.582+0000 7f2c044fc140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:38:05.738 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:05 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:05.622+0000 7f2c044fc140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T13:38:05.738 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:05 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:05.658+0000 7f2c044fc140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T13:38:05.738 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:05 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:05.700+0000 7f2c044fc140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T13:38:05.823 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:05 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:05.552+0000 7f66344e3140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T13:38:05.823 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:05 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:05.665+0000 7f66344e3140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:38:05.823 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:05 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:05.706+0000 7f66344e3140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T13:38:05.823 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:05 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:05.742+0000 7f66344e3140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T13:38:05.823 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:05 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 
2026-03-10T13:38:05.785+0000 7f66344e3140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T13:38:06.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:05 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:05.822+0000 7f66344e3140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T13:38:06.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:05 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:05.996+0000 7f66344e3140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T13:38:06.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:06 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:06.052+0000 7f66344e3140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T13:38:06.173 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:05 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:05.737+0000 7f2c044fc140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T13:38:06.173 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:05 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:05.914+0000 7f2c044fc140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T13:38:06.173 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:05 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:05.967+0000 7f2c044fc140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T13:38:06.487 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:06 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:06.201+0000 7f2c044fc140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T13:38:06.554 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:06 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:06.287+0000 7f66344e3140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T13:38:06.750 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:06 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:06.487+0000 7f2c044fc140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T13:38:06.750 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:06 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:06.523+0000 7f2c044fc140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T13:38:06.750 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:06 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:06.562+0000 7f2c044fc140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T13:38:06.750 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:06 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:06.636+0000 7f2c044fc140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T13:38:06.750 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:06 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:06.670+0000 7f2c044fc140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T13:38:06.814 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:06 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:06.554+0000 7f66344e3140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T13:38:06.814 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:06 
vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:06.589+0000 7f66344e3140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T13:38:06.814 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:06 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:06.628+0000 7f66344e3140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T13:38:06.814 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:06 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:06.700+0000 7f66344e3140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T13:38:06.814 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:06 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:06.736+0000 7f66344e3140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T13:38:07.031 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:06 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:06.749+0000 7f2c044fc140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T13:38:07.031 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:06 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:06.863+0000 7f2c044fc140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:38:07.031 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:06 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:06.995+0000 7f2c044fc140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T13:38:07.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:06 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:06.813+0000 7f66344e3140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T13:38:07.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:06 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:06.921+0000 7f66344e3140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:38:07.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:07 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:07.053+0000 7f66344e3140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T13:38:07.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:07 vm09 ceph-mon[53367]: Standby manager daemon x restarted 2026-03-10T13:38:07.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:07 vm09 ceph-mon[53367]: Standby manager daemon x started 2026-03-10T13:38:07.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:07 vm09 ceph-mon[53367]: from='mgr.? v1:192.168.123.109:0/882704397' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T13:38:07.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:07 vm09 ceph-mon[53367]: from='mgr.? v1:192.168.123.109:0/882704397' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T13:38:07.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:07 vm09 ceph-mon[53367]: from='mgr.? v1:192.168.123.109:0/882704397' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T13:38:07.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:07 vm09 ceph-mon[53367]: from='mgr.? 
v1:192.168.123.109:0/882704397' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T13:38:07.424 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:07 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:07.031+0000 7f2c044fc140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T13:38:07.424 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:07 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: [10/Mar/2026:13:38:07] ENGINE Bus STARTING 2026-03-10T13:38:07.424 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:07 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: CherryPy Checker: 2026-03-10T13:38:07.424 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:07 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: The Application mounted at '' has an empty config. 2026-03-10T13:38:07.424 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:07 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: 2026-03-10T13:38:07.424 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:07 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: [10/Mar/2026:13:38:07] ENGINE Serving on http://:::9283 2026-03-10T13:38:07.424 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 13:38:07 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x[54647]: [10/Mar/2026:13:38:07] ENGINE Bus STARTED 2026-03-10T13:38:07.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:07 vm05 ceph-mon[58955]: Standby manager daemon x restarted 2026-03-10T13:38:07.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:07 vm05 ceph-mon[58955]: Standby manager daemon x started 2026-03-10T13:38:07.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:07 vm05 ceph-mon[58955]: from='mgr.? v1:192.168.123.109:0/882704397' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T13:38:07.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:07 vm05 ceph-mon[58955]: from='mgr.? v1:192.168.123.109:0/882704397' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T13:38:07.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:07 vm05 ceph-mon[58955]: from='mgr.? v1:192.168.123.109:0/882704397' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T13:38:07.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:07 vm05 ceph-mon[58955]: from='mgr.? v1:192.168.123.109:0/882704397' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T13:38:07.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:07 vm05 ceph-mon[51512]: Standby manager daemon x restarted 2026-03-10T13:38:07.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:07 vm05 ceph-mon[51512]: Standby manager daemon x started 2026-03-10T13:38:07.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:07 vm05 ceph-mon[51512]: from='mgr.? v1:192.168.123.109:0/882704397' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T13:38:07.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:07 vm05 ceph-mon[51512]: from='mgr.? v1:192.168.123.109:0/882704397' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T13:38:07.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:07 vm05 ceph-mon[51512]: from='mgr.? 
v1:192.168.123.109:0/882704397' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T13:38:07.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:07 vm05 ceph-mon[51512]: from='mgr.? v1:192.168.123.109:0/882704397' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T13:38:07.582 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:07 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:07.099+0000 7f66344e3140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T13:38:07.582 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:07 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:07] ENGINE Bus STARTING 2026-03-10T13:38:07.582 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:07 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: CherryPy Checker: 2026-03-10T13:38:07.582 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:07 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: The Application mounted at '' has an empty config. 2026-03-10T13:38:07.582 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:07 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: 2026-03-10T13:38:07.582 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:07 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:07] ENGINE Serving on http://:::9283 2026-03-10T13:38:07.582 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:07 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:07] ENGINE Bus STARTED 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: mgrmap e18: y(active, since 2m), standbys: x 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: Active manager daemon y restarted 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: Activating manager daemon y 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: osdmap e59: 8 total, 8 up, 8 in 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: mgrmap e19: y(active, starting, since 0.0169293s), standbys: x 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 
ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: Manager daemon y is now available 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:38:08.174 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:08 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:08.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:38:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:38:08.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: mgrmap e18: y(active, since 2m), standbys: x 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: Active manager daemon y restarted 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: Activating manager daemon y 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: osdmap e59: 8 total, 8 up, 8 in 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: mgrmap e19: y(active, starting, since 0.0169293s), standbys: x 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 
v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: Manager daemon y is now available 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[51512]: 
from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: mgrmap e18: y(active, since 2m), standbys: x 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: Active manager daemon y restarted 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: Activating manager daemon y 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: osdmap e59: 8 total, 8 up, 8 in 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: mgrmap e19: y(active, starting, since 0.0169293s), standbys: x 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' 
cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: Manager daemon y is now available 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:08.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:38:08.586 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:38:08.586 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:38:08.586 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:38:08.586 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T13:38:08.586 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T13:38:08.586 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:08.586 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:08 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: [10/Mar/2026:13:38:08] ENGINE Bus STARTING 2026-03-10T13:38:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: mgrmap e20: y(active, since 1.03074s), standbys: x 2026-03-10T13:38:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:38:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:38:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 
2026-03-10T13:38:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: [10/Mar/2026:13:38:08] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T13:38:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: [10/Mar/2026:13:38:08] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T13:38:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: [10/Mar/2026:13:38:08] ENGINE Bus STARTED 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: [10/Mar/2026:13:38:08] ENGINE Client ('192.168.123.105', 54318) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: Updating vm05:/etc/ceph/ceph.conf 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: Updating vm09:/etc/ceph/ceph.conf 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: Updating vm09:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.conf 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: Updating vm05:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.conf 2026-03-10T13:38:09.424 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:09 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: [10/Mar/2026:13:38:08] ENGINE Bus STARTING 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: mgrmap e20: y(active, since 1.03074s), standbys: x 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: [10/Mar/2026:13:38:08] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: [10/Mar/2026:13:38:08] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: [10/Mar/2026:13:38:08] ENGINE Bus STARTED 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: [10/Mar/2026:13:38:08] ENGINE Client ('192.168.123.105', 54318) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 
13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:38:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: Updating vm05:/etc/ceph/ceph.conf 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: Updating vm09:/etc/ceph/ceph.conf 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: Updating vm09:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.conf 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: Updating vm05:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.conf 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: [10/Mar/2026:13:38:08] ENGINE Bus STARTING 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: mgrmap e20: y(active, since 1.03074s), standbys: x 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 
13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: [10/Mar/2026:13:38:08] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: [10/Mar/2026:13:38:08] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: [10/Mar/2026:13:38:08] ENGINE Bus STARTED 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: [10/Mar/2026:13:38:08] ENGINE Client ('192.168.123.105', 54318) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: Updating vm05:/etc/ceph/ceph.conf 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: Updating vm09:/etc/ceph/ceph.conf 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: Updating vm09:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.conf 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: Updating 
vm05:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.conf 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:09.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:10 vm05 ceph-mon[58955]: Updating vm09:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.client.admin.keyring 2026-03-10T13:38:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:10 vm05 ceph-mon[58955]: Updating vm05:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.client.admin.keyring 2026-03-10T13:38:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:10 vm05 ceph-mon[58955]: pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:38:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:10 vm05 ceph-mon[58955]: Deploying daemon alertmanager.a on vm05 2026-03-10T13:38:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:10 vm05 ceph-mon[58955]: mgrmap e21: y(active, since 3s), standbys: x 2026-03-10T13:38:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:10 vm05 ceph-mon[51512]: Updating vm09:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.client.admin.keyring 2026-03-10T13:38:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:10 vm05 ceph-mon[51512]: Updating vm05:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.client.admin.keyring 2026-03-10T13:38:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:10 vm05 ceph-mon[51512]: pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:38:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:10 vm05 ceph-mon[51512]: Deploying daemon alertmanager.a on vm05 2026-03-10T13:38:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:10 vm05 ceph-mon[51512]: mgrmap e21: y(active, since 3s), standbys: x 2026-03-10T13:38:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:38:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:38:10.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:10 vm09 ceph-mon[53367]: Updating vm09:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.client.admin.keyring 2026-03-10T13:38:10.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:10 vm09 ceph-mon[53367]: Updating vm05:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/config/ceph.client.admin.keyring 2026-03-10T13:38:10.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:10 vm09 ceph-mon[53367]: pgmap v4: 132 
pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:38:10.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:10 vm09 ceph-mon[53367]: Deploying daemon alertmanager.a on vm05 2026-03-10T13:38:10.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:10 vm09 ceph-mon[53367]: mgrmap e21: y(active, since 3s), standbys: x 2026-03-10T13:38:12.447 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:12 vm05 ceph-mon[58955]: pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:38:12.447 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:12 vm05 ceph-mon[58955]: mgrmap e22: y(active, since 4s), standbys: x 2026-03-10T13:38:12.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:12 vm05 ceph-mon[51512]: pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:38:12.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:12 vm05 ceph-mon[51512]: mgrmap e22: y(active, since 4s), standbys: x 2026-03-10T13:38:12.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:12 vm09 ceph-mon[53367]: pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:38:12.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:12 vm09 ceph-mon[53367]: mgrmap e22: y(active, since 4s), standbys: x 2026-03-10T13:38:13.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:12 vm05 systemd[1]: Starting Ceph alertmanager.a for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T13:38:13.485 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:13 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:13] ENGINE Bus STOPPING 2026-03-10T13:38:13.485 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:13 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:13] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T13:38:13.485 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:13 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:13] ENGINE Bus STOPPED 2026-03-10T13:38:13.485 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:13 vm05 podman[89395]: 2026-03-10 13:38:13.088129605 +0000 UTC m=+0.020120076 volume create 6b09f5b376820b5dae55feacd583cea9645c984cebcdb76411b1499f1267e469 2026-03-10T13:38:13.485 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:13 vm05 podman[89395]: 2026-03-10 13:38:13.091755914 +0000 UTC m=+0.023746385 container create d952ff23a2860f57ef3a4e9593f1995db18b2baa8edc61d5c6f1e502c70368e7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-10T13:38:13.485 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:13 vm05 podman[89395]: 2026-03-10 13:38:13.147765193 +0000 UTC m=+0.079755674 container init d952ff23a2860f57ef3a4e9593f1995db18b2baa8edc61d5c6f1e502c70368e7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-10T13:38:13.485 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:13 vm05 podman[89395]: 2026-03-10 13:38:13.153286058 +0000 UTC m=+0.085276529 container start d952ff23a2860f57ef3a4e9593f1995db18b2baa8edc61d5c6f1e502c70368e7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a, maintainer=The 
Prometheus Authors ) 2026-03-10T13:38:13.485 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:13 vm05 bash[89395]: d952ff23a2860f57ef3a4e9593f1995db18b2baa8edc61d5c6f1e502c70368e7 2026-03-10T13:38:13.485 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:13 vm05 podman[89395]: 2026-03-10 13:38:13.079738847 +0000 UTC m=+0.011729329 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0 2026-03-10T13:38:13.485 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:13 vm05 systemd[1]: Started Ceph alertmanager.a for e063dc72-1c85-11f1-a098-09993c5c5b66. 2026-03-10T13:38:13.485 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:13 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[89405]: ts=2026-03-10T13:38:13.169Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-10T13:38:13.485 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:13 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[89405]: ts=2026-03-10T13:38:13.170Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-10T13:38:13.485 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:13 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[89405]: ts=2026-03-10T13:38:13.171Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.105 port=9094 2026-03-10T13:38:13.485 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:13 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[89405]: ts=2026-03-10T13:38:13.173Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s 2026-03-10T13:38:13.485 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:13 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[89405]: ts=2026-03-10T13:38:13.199Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T13:38:13.485 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:13 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[89405]: ts=2026-03-10T13:38:13.199Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T13:38:13.485 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:13 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[89405]: ts=2026-03-10T13:38:13.203Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-10T13:38:13.485 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:13 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[89405]: ts=2026-03-10T13:38:13.203Z caller=tls_config.go:235 level=info msg="TLS is disabled." 
http2=false address=[::]:9093 2026-03-10T13:38:13.831 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:13 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:13] ENGINE Bus STARTING 2026-03-10T13:38:13.831 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:13 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:13] ENGINE Serving on http://:::9283 2026-03-10T13:38:13.831 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:13 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:13] ENGINE Bus STARTED 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[51512]: pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[51512]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[51512]: Deploying daemon grafana.a on vm09 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[58955]: pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[58955]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T13:38:14.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:38:14.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:38:14.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:14 vm05 ceph-mon[58955]: Deploying daemon grafana.a on vm09 2026-03-10T13:38:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:14 vm09 ceph-mon[53367]: pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:38:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:14 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:14 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:14 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:14 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:14 vm09 ceph-mon[53367]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T13:38:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:14 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:14 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:14 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:38:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:14 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:38:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:14 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:14 vm09 ceph-mon[53367]: Deploying daemon grafana.a on vm09 2026-03-10T13:38:15.581 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:15 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[89405]: ts=2026-03-10T13:38:15.174Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000388461s 2026-03-10T13:38:16.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:16 vm05 ceph-mon[51512]: pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T13:38:16.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:16 vm05 ceph-mon[58955]: pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T13:38:16.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:16 vm09 ceph-mon[53367]: pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T13:38:18.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:18 vm09 ceph-mon[53367]: pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T13:38:18.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:18 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:18.423 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:38:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:38:18.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:18 vm05 ceph-mon[58955]: pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T13:38:18.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:18 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:18.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:18 vm05 ceph-mon[51512]: pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T13:38:18.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:18 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:19.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:38:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:38:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:38:20.324 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:20 vm05 ceph-mon[51512]: pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 
2026-03-10T13:38:20.324 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:38:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:38:20.565 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:20 vm09 ceph-mon[53367]: pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T13:38:20.565 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 systemd[1]: Starting Ceph grafana.a for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T13:38:20.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:20 vm05 ceph-mon[58955]: pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 podman[80411]: 2026-03-10 13:38:20.565702311 +0000 UTC m=+0.019069520 container create 68dee2ed99826e4ce4719167423a6b1b97d2929e3bb0fd1efb8cbfb3ea841a61 (image=quay.io/ceph/grafana:10.4.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a, maintainer=Grafana Labs ) 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 podman[80411]: 2026-03-10 13:38:20.616173078 +0000 UTC m=+0.069540297 container init 68dee2ed99826e4ce4719167423a6b1b97d2929e3bb0fd1efb8cbfb3ea841a61 (image=quay.io/ceph/grafana:10.4.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a, maintainer=Grafana Labs ) 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 podman[80411]: 2026-03-10 13:38:20.620404697 +0000 UTC m=+0.073771906 container start 68dee2ed99826e4ce4719167423a6b1b97d2929e3bb0fd1efb8cbfb3ea841a61 (image=quay.io/ceph/grafana:10.4.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a, maintainer=Grafana Labs ) 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 bash[80411]: 68dee2ed99826e4ce4719167423a6b1b97d2929e3bb0fd1efb8cbfb3ea841a61 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 podman[80411]: 2026-03-10 13:38:20.557001768 +0000 UTC m=+0.010368986 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 systemd[1]: Started Ceph grafana.a for e063dc72-1c85-11f1-a098-09993c5c5b66. 
2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.709990698Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-10T13:38:20Z 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.710124509Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.710128797Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.710132063Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.710134378Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.710137824Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.710139367Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.71014091Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.710142963Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.710144637Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.7101461Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.710149235Z level=info msg="Config overridden from Environment variable" 
var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-10T13:38:20.816 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.710150978Z level=info msg=Target target=[all] 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.710153934Z level=info msg="Path Home" path=/usr/share/grafana 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.710155537Z level=info msg="Path Data" path=/var/lib/grafana 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.71015704Z level=info msg="Path Logs" path=/var/log/grafana 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.710158503Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.710159995Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=settings t=2026-03-10T13:38:20.710161549Z level=info msg="App mode production" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=sqlstore t=2026-03-10T13:38:20.710311158Z level=info msg="Connecting to DB" dbtype=sqlite3 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=sqlstore t=2026-03-10T13:38:20.710319225Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.710668848Z level=info msg="Starting DB migrations" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.711321339Z level=info msg="Executing migration" id="create migration_log table" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.71182379Z level=info msg="Migration successfully executed" id="create migration_log table" duration=502.201µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.712446044Z level=info msg="Executing migration" id="create user table" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.712778395Z level=info msg="Migration successfully executed" id="create user table" duration=332.252µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.713233247Z level=info msg="Executing migration" id="add unique index user.login" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.713581018Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=347.681µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.714086004Z level=info msg="Executing migration" id="add unique index user.email" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.714394841Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=308.628µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.714887413Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.715231677Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=348.431µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.715758804Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.716070036Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=311.152µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.716520359Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.717444188Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=923.709µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.717892017Z level=info msg="Executing migration" id="create user table v2" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.718204862Z level=info msg="Migration successfully executed" 
id="create user table v2" duration=312.736µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.718641609Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.718937723Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=295.894µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.719410058Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.719714016Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=303.889µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.720147848Z level=info msg="Executing migration" id="copy data_source v1 to v2" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.720329379Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=181.401µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.720683671Z level=info msg="Executing migration" id="Drop old table user_v1" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.72093957Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=255.769µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.721206068Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.721690666Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=481.651µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.722161597Z level=info msg="Executing migration" id="Update user table charset" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.722171094Z level=info msg="Migration successfully executed" id="Update user table charset" duration=9.848µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.722641806Z level=info msg="Executing migration" id="Add last_seen_at column to user" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.723065159Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=420.147µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.723368967Z level=info msg="Executing migration" id="Add missing user data" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.72345117Z level=info msg="Migration successfully executed" id="Add missing user data" duration=86.481µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.723934745Z level=info msg="Executing migration" id="Add is_disabled column to user" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.72437505Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=440.244µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.724785798Z level=info msg="Executing migration" id="Add index user.login/user.email" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.725083385Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=297.617µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.725505446Z level=info msg="Executing migration" id="Add is_service_account column to user" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.726001313Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=496.108µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.72637422Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.729284298Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=2.909756ms 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator 
t=2026-03-10T13:38:20.729775907Z level=info msg="Executing migration" id="Add uid column to user" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.730208046Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=431.989µs 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.730627191Z level=info msg="Executing migration" id="Update uid column values for users" 2026-03-10T13:38:20.817 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.730713261Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=86.211µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.731140701Z level=info msg="Executing migration" id="Add unique index user_uid" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.731483923Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=343.062µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.731940659Z level=info msg="Executing migration" id="create temp user table v1-7" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.732261058Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=320.37µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.732701292Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.732998118Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=296.476µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.733369483Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.733658043Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=288.37µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.734081416Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 
2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.734375586Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=294.02µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.734810199Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.735103347Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=292.968µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.735511662Z level=info msg="Executing migration" id="Update temp_user table charset" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.73552121Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=10.088µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.735939773Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.736260734Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=320.58µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.736762783Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.7370498Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=284.212µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.737490635Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.737805754Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=314.919µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.738249345Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 
10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.738553946Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=304.721µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.738960386Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.740087685Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=1.127079ms 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.740556523Z level=info msg="Executing migration" id="create temp_user v2" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.740899305Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=342.761µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.741318199Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.741648818Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=330.439µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.742038076Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.742359027Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=324.347µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.742759757Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.743074776Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=314.859µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.743503298Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.743801146Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=297.717µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.744238024Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.74441787Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=179.727µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.744801157Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.745030616Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=229.32µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.745334236Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.745506287Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=172.021µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.745943475Z level=info msg="Executing migration" id="create star table" 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.746220824Z level=info msg="Migration successfully executed" id="create star table" duration=262.111µs 2026-03-10T13:38:20.818 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.746607458Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.746910014Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=302.316µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.747327045Z level=info msg="Executing migration" id="create org table v1" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator 
t=2026-03-10T13:38:20.747628268Z level=info msg="Migration successfully executed" id="create org table v1" duration=301.113µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.748055438Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.748363133Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=307.516µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.74877855Z level=info msg="Executing migration" id="create org_user table v1" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.74905068Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=268.824µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.749472039Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.749810873Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=338.743µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.75018367Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.750541911Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=357.85µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.750928995Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.751258311Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=329.246µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.751656969Z level=info msg="Executing migration" id="Update org table charset" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.751665855Z level=info msg="Migration successfully executed" id="Update org 
table charset" duration=9.227µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.752097743Z level=info msg="Executing migration" id="Update org_user table charset" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.752106379Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=8.977µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.752548827Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.752619269Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=74.62µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.753019458Z level=info msg="Executing migration" id="create dashboard table" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.753362229Z level=info msg="Migration successfully executed" id="create dashboard table" duration=342.381µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.753757479Z level=info msg="Executing migration" id="add index dashboard.account_id" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.754112253Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=354.604µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.754561324Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.754893835Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=335.017µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.755305687Z level=info msg="Executing migration" id="create dashboard_tag table" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.75557468Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=268.883µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 
vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.755971873Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.756297803Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=325.669µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.756693063Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.757003775Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=310.561µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.757405867Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.759423824Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=2.017847ms 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.759882703Z level=info msg="Executing migration" id="create dashboard v2" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.760238469Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=355.746µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.760652103Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.760965269Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=309.249µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.761369354Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.761674286Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=304.64µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.762053335Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.762252567Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=199.092µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.762643069Z level=info msg="Executing migration" id="drop table dashboard_v1" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.763030695Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=385.771µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.763454588Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.76347741Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=23.193µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.763884271Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.764527215Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=639.787µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.764926472Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.765541694Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=612.316µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.765912477Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.766544511Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=634.818µs 2026-03-10T13:38:20.819 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.766909193Z 
level=info msg="Executing migration" id="Add index for gnetId in dashboard" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.767248557Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=339.255µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.767624812Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.768261253Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=636.24µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.768660651Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.768993775Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=333.074µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.769360681Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.769668056Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=307.224µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.770073675Z level=info msg="Executing migration" id="Update dashboard table charset" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.770082491Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=27.973µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.77055688Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.77056781Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=11.31µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.770997264Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 
2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.771642241Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=644.957µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.772024296Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.772657942Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=633.597µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.773015962Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.773682238Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=666.126µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.774188817Z level=info msg="Executing migration" id="Add column uid in dashboard" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.774907421Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=718.303µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.775403591Z level=info msg="Executing migration" id="Update uid column values in dashboard" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.775496714Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=93.195µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.775907603Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.776203487Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=295.733µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.776582517Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.776901604Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=319.197µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.777272408Z level=info msg="Executing migration" id="Update dashboard title length" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.777281755Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=9.848µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.777708404Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.77801616Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=307.575µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.778316682Z level=info msg="Executing migration" id="create dashboard_provisioning" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.778607816Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=290.975µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.778965096Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.780589786Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=1.62423ms 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.780978384Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.781282282Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=303.808µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.781675799Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 2026-03-10T13:38:20.820 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.781976561Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=300.673µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.782358135Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.782683375Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=324.959µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.783071992Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.783231951Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=159.85µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.783617202Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.783867701Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=250.409µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.784276666Z level=info msg="Executing migration" id="Add check_sum column" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.784987366Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=711.751µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.78538988Z level=info msg="Executing migration" id="Add index for dashboard_title" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.785719357Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=329.447µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.786117642Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 2026-03-10T13:38:20.820 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.786195627Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=78.167µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.786618569Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.786696245Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=77.565µs 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.787121901Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 2026-03-10T13:38:20.820 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.787481875Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=359.994µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.787854021Z level=info msg="Executing migration" id="Add isPublic for dashboard" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.788758233Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=904.021µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.789190733Z level=info msg="Executing migration" id="create data_source table" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.789664088Z level=info msg="Migration successfully executed" id="create data_source table" duration=473.316µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.79006559Z level=info msg="Executing migration" id="add index data_source.account_id" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.790419743Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=354.234µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.790803361Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.791124051Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=320.35µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.791535331Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.791851954Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=316.562µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.79224594Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.792602468Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=356.107µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.793027223Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.794971932Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=1.946482ms 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.795399994Z level=info msg="Executing migration" id="create data_source table v2" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.795746623Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=346.458µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.796153954Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.79649284Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=339.436µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.796899009Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 
vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.797244646Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=345.416µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.79766303Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.797935179Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=272.119µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.798344113Z level=info msg="Executing migration" id="Add column with_credentials" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.799115217Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=771.103µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.799537306Z level=info msg="Executing migration" id="Add secure json data column" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.800300145Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=763.259µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.800695044Z level=info msg="Executing migration" id="Update data_source table charset" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.800703709Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=8.937µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.801141148Z level=info msg="Executing migration" id="Update initial version to 1" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.801242979Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=101.882µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.801674818Z level=info msg="Executing migration" id="Add read_only data column" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.802426724Z level=info 
msg="Migration successfully executed" id="Add read_only data column" duration=751.826µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.802810513Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.802898237Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=87.493µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.803331037Z level=info msg="Executing migration" id="Update json_data with nulls" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.8034074Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=76.744µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.803819612Z level=info msg="Executing migration" id="Add uid column" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.804566389Z level=info msg="Migration successfully executed" id="Add uid column" duration=746.647µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.804939467Z level=info msg="Executing migration" id="Update uid value" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.805020158Z level=info msg="Migration successfully executed" id="Update uid value" duration=80.891µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.805466714Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.805793234Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=325.82µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.8061812Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.806515517Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=334.096µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 
10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.80690218Z level=info msg="Executing migration" id="create api_key table" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.807251534Z level=info msg="Migration successfully executed" id="create api_key table" duration=349.203µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.807663766Z level=info msg="Executing migration" id="add index api_key.account_id" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.807997199Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=333.434µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.808372292Z level=info msg="Executing migration" id="add index api_key.key" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.808681039Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=308.597µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.809088072Z level=info msg="Executing migration" id="add index api_key.account_id_name" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.809426434Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=337.912µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.809869734Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.810243283Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=371.487µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.810680361Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.810990681Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=314.869µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.811393636Z level=info 
msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.811745234Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=349.684µs 2026-03-10T13:38:20.821 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.812145153Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 2026-03-10T13:38:20.822 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.814460947Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=2.315614ms 2026-03-10T13:38:20.822 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.814889418Z level=info msg="Executing migration" id="create api_key table v2" 2026-03-10T13:38:20.822 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.81520512Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=315.501µs 2026-03-10T13:38:20.822 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.815623793Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 2026-03-10T13:38:20.822 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.815970782Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=346.869µs 2026-03-10T13:38:20.822 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.816379317Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 2026-03-10T13:38:21.070 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.816693165Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=313.717µs 2026-03-10T13:38:21.070 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.825641481Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 2026-03-10T13:38:21.070 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.825973061Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=333.724µs 2026-03-10T13:38:21.070 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.826429787Z level=info msg="Executing migration" id="copy api_key v1 to v2" 
2026-03-10T13:38:21.070 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.826603682Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=173.756µs 2026-03-10T13:38:21.070 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.827279657Z level=info msg="Executing migration" id="Drop old table api_key_v1" 2026-03-10T13:38:21.070 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.8275266Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=246.923µs 2026-03-10T13:38:21.070 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.828012228Z level=info msg="Executing migration" id="Update api_key table charset" 2026-03-10T13:38:21.070 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.828026605Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=14.817µs 2026-03-10T13:38:21.070 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.82838214Z level=info msg="Executing migration" id="Add expires to api_key table" 2026-03-10T13:38:21.070 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.829181226Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=799.066µs 2026-03-10T13:38:21.070 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.829570935Z level=info msg="Executing migration" id="Add service account foreign key" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.830383175Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=812.1µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.830787532Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.830858335Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=71.003µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.831302226Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.832162155Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=859.719µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.832564969Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.833793448Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.228129ms 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.834386347Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.834723738Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=337.411µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.83511952Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.835415504Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=293.82µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.835825761Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.836202667Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=374.311µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.836822807Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.837525032Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=703.537µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.837986575Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.838383598Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=396.922µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.838808734Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.83915954Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=350.656µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.839606046Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.839636032Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=30.206µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.840095873Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.840111582Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=15.979µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.840563117Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.841511803Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=948.404µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.84189517Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.842886887Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=990.694µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.843485767Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 2026-03-10T13:38:21.071 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.843512817Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=29.134µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.844010169Z level=info msg="Executing migration" id="create quota table v1" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.844334806Z level=info msg="Migration successfully executed" id="create quota table v1" duration=322.022µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.844736989Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.845064242Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=327.093µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.845463558Z level=info msg="Executing migration" id="Update quota table charset" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.845473187Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=9.928µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.845886801Z level=info msg="Executing migration" id="create plugin_setting table" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.846198344Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=311.834µs 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.846579026Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 2026-03-10T13:38:21.071 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.846901059Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=321.952µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.847268045Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 
10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.848230286Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=961.98µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.848613434Z level=info msg="Executing migration" id="Update plugin_setting table charset" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.848622932Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=9.819µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.849180775Z level=info msg="Executing migration" id="create session table" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.849556658Z level=info msg="Migration successfully executed" id="create session table" duration=375.813µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.849994739Z level=info msg="Executing migration" id="Drop old table playlist table" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.850029514Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=35.006µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.850384398Z level=info msg="Executing migration" id="Drop old table playlist_item table" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.850420105Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=35.907µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.850905302Z level=info msg="Executing migration" id="create playlist table v2" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.851203712Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=298.479µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.851615572Z level=info msg="Executing migration" id="create playlist item table v2" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.851937414Z level=info 
msg="Migration successfully executed" id="create playlist item table v2" duration=321.571µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.85234639Z level=info msg="Executing migration" id="Update playlist table charset" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.852356169Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=10.44µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.852835425Z level=info msg="Executing migration" id="Update playlist_item table charset" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.852845404Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=9.248µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.853183156Z level=info msg="Executing migration" id="Add playlist column created_at" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.854133635Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=950.59µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.85468144Z level=info msg="Executing migration" id="Add playlist column updated_at" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.855618574Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=936.172µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.856002853Z level=info msg="Executing migration" id="drop preferences table v2" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.85603919Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=36.778µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.85646061Z level=info msg="Executing migration" id="drop preferences table v3" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.856495656Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=36.458µs 2026-03-10T13:38:21.072 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.856907976Z level=info msg="Executing migration" id="create preferences table v3" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.857256569Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=348.552µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.85763077Z level=info msg="Executing migration" id="Update preferences table charset" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.857639506Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=9.097µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.85805846Z level=info msg="Executing migration" id="Add column team_id in preferences" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.859069172Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=1.010602ms 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.859543279Z level=info msg="Executing migration" id="Update team_id column values in preferences" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.859606998Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=63.869µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.860033787Z level=info msg="Executing migration" id="Add column week_start in preferences" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.860963316Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=929.408µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.861351744Z level=info msg="Executing migration" id="Add column preferences.json_data" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.862304035Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=952.212µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.862662386Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.862686712Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=24.656µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.863142495Z level=info msg="Executing migration" id="Add preferences index org_id" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.863554846Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=412.031µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.863956138Z level=info msg="Executing migration" id="Add preferences index user_id" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.864332222Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=376.324µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.864702574Z level=info msg="Executing migration" id="create alert table v1" 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.865163046Z level=info msg="Migration successfully executed" id="create alert table v1" duration=459.881µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.86558168Z level=info msg="Executing migration" id="add index alert org_id & id " 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.865950249Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=368.579µs 2026-03-10T13:38:21.072 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.866366389Z level=info msg="Executing migration" id="add index alert state" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.866683703Z level=info msg="Migration successfully executed" id="add index alert state" duration=317.113µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.867054305Z level=info msg="Executing migration" 
id="add index alert dashboard_id" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.867393531Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=339.166µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.867755538Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.868038799Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=283.06µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.868420673Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.868765128Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=342.011µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.869132836Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.869467452Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=334.555µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.869834229Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.872635852Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=2.801463ms 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.873037483Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.87335078Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=313.216µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.873728087Z level=info msg="Executing migration" id="create 
index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.874048616Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=320.43µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.87442495Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.874557438Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=131.435µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.874922151Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.875154977Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=232.835µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.875624324Z level=info msg="Executing migration" id="create alert_notification table v1" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.875932511Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=308.486µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.876333212Z level=info msg="Executing migration" id="Add column is_default" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.877407713Z level=info msg="Migration successfully executed" id="Add column is_default" duration=1.07435ms 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.877782424Z level=info msg="Executing migration" id="Add column frequency" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.879093056Z level=info msg="Migration successfully executed" id="Add column frequency" duration=1.309901ms 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.879545093Z level=info msg="Executing migration" id="Add column send_reminder" 
2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.880750418Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=1.205135ms 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.881156549Z level=info msg="Executing migration" id="Add column disable_resolve_message" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.882271165Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=1.114465ms 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.882681042Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.883020457Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=339.234µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.88340688Z level=info msg="Executing migration" id="Update alert table charset" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.883416578Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=9.688µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.883803823Z level=info msg="Executing migration" id="Update alert_notification table charset" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.883813812Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=9.187µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.884147115Z level=info msg="Executing migration" id="create notification_journal table v1" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.88448652Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=339.194µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.88487081Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 2026-03-10T13:38:21.073 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.885190719Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=319.688µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.885604914Z level=info msg="Executing migration" id="drop alert_notification_journal" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.885929971Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=324.938µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.886362822Z level=info msg="Executing migration" id="create alert_notification_state table v1" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.886676339Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=313.166µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.887123616Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.887466448Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=342.712µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.887834527Z level=info msg="Executing migration" id="Add for to alert table" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.888929245Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=1.09492ms 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.889335064Z level=info msg="Executing migration" id="Add column uid in alert_notification" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.890441846Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=1.106621ms 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.890824391Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 
2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.890897709Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=73.467µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.891367339Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.89168915Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=321.911µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.892099979Z level=info msg="Executing migration" id="Remove unique index org_id_name" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.892451728Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=351.528µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.892833843Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.893989927Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=1.155874ms 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.894412136Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.894436431Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=23.294µs 2026-03-10T13:38:21.073 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.894887156Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.895203648Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=316.341µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.895598106Z level=info msg="Executing migration" id="Add non-unique index 
alert_rule_tag_alert_id" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.895992364Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=394.249µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.89640726Z level=info msg="Executing migration" id="Drop old annotation table v4" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.89644444Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=36.698µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.896864206Z level=info msg="Executing migration" id="create annotation table v5" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.897235711Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=371.316µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.897622435Z level=info msg="Executing migration" id="add index annotation 0 v3" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.897974333Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=352.599µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.898367499Z level=info msg="Executing migration" id="add index annotation 1 v3" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.898697006Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=329.377µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.899260771Z level=info msg="Executing migration" id="add index annotation 2 v3" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.899620053Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=359.021µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.900078622Z level=info msg="Executing migration" id="add index annotation 3 v3" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.900478119Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=399.167µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.90088518Z level=info msg="Executing migration" id="add index annotation 4 v3" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.901307652Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=422.29µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.901704905Z level=info msg="Executing migration" id="Update annotation table charset" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.90172881Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=24.195µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.902154788Z level=info msg="Executing migration" id="Add column region_id to annotation table" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.903567652Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=1.412864ms 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.903987357Z level=info msg="Executing migration" id="Drop category_id index" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.904348292Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=360.585µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.905542056Z level=info msg="Executing migration" id="Add column tags to annotation table" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.906896782Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=1.355006ms 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.907425712Z level=info msg="Executing migration" id="Create annotation_tag table v2" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.907821141Z level=info 
msg="Migration successfully executed" id="Create annotation_tag table v2" duration=396.101µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.908422358Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.908925278Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=502.761µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.909386481Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.90983981Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=453.109µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.910299971Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.913839896Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=3.539775ms 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.914316327Z level=info msg="Executing migration" id="Create annotation_tag table v3" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.914706218Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=389.641µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.915179212Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.915610329Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=430.837µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.916110095Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 2026-03-10T13:38:21.074 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.916324005Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=213.971µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.916744281Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.917067626Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=322.352µs 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.917576599Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 2026-03-10T13:38:21.074 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.917760824Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=184.425µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.918197731Z level=info msg="Executing migration" id="Add created time to annotation table" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.919425218Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=1.226025ms 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.919826881Z level=info msg="Executing migration" id="Add updated time to annotation table" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.92100709Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=1.180189ms 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.921441873Z level=info msg="Executing migration" id="Add index for created in annotation table" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.92181957Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=377.697µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.922265675Z level=info msg="Executing migration" id="Add index for updated in annotation table" 2026-03-10T13:38:21.075 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.92263161Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=365.544µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.923063999Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.923167783Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=104.084µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.92364698Z level=info msg="Executing migration" id="Add epoch_end column" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.924857886Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=1.210655ms 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.925309691Z level=info msg="Executing migration" id="Add index for epoch_end" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.925717986Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=407.803µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.926154383Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.926246155Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=91.912µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.926677161Z level=info msg="Executing migration" id="Move region to single row" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.926833042Z level=info msg="Migration successfully executed" id="Move region to single row" duration=155.781µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.92715207Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.927554433Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=401.291µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.927981402Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.928365531Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=383.809µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.928769206Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.929143858Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=373.789µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.929598799Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.930033783Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=434.373µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.930507319Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.930875148Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=367.979µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.931328687Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.931693229Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=364.271µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator 
t=2026-03-10T13:38:20.932108216Z level=info msg="Executing migration" id="Increase tags column to length 4096" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.932132541Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=24.696µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.932641714Z level=info msg="Executing migration" id="create test_data table" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.932972523Z level=info msg="Migration successfully executed" id="create test_data table" duration=330.719µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.933391617Z level=info msg="Executing migration" id="create dashboard_version table v1" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.933719742Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=328.424µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.934125701Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.934529726Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=404.817µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.934937982Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.93533816Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=400.32µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.935752285Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.93584017Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=87.836µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator 
t=2026-03-10T13:38:20.936290262Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.936459649Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=169.326µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.93686642Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.936892188Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=26.119µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.937248185Z level=info msg="Executing migration" id="create team table" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.937557233Z level=info msg="Migration successfully executed" id="create team table" duration=309.379µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.937960318Z level=info msg="Executing migration" id="add index team.org_id" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.938404109Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=443.35µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.938841036Z level=info msg="Executing migration" id="add unique index team_org_id_name" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.939232719Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=392.124µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.939648327Z level=info msg="Executing migration" id="Add column uid in team" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.940962246Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=1.313579ms 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.941404153Z level=info msg="Executing migration" id="Update uid column values 
in team" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.941490985Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=87.062µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.941942811Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.942338602Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=395.651µs 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.942767925Z level=info msg="Executing migration" id="create team member table" 2026-03-10T13:38:21.075 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.943082213Z level=info msg="Migration successfully executed" id="create team member table" duration=313.246µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.9435452Z level=info msg="Executing migration" id="add index team_member.org_id" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.94391926Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=374.019µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.944322314Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.944694471Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=371.926µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.945119917Z level=info msg="Executing migration" id="add index team_member.team_id" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.945504206Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=384.159µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.945923702Z level=info msg="Executing migration" id="Add column email to team table" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.947308353Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=1.38456ms 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.94771877Z level=info msg="Executing migration" id="Add column external to team_member table" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.949154016Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=1.435065ms 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.949563744Z level=info msg="Executing migration" id="Add column permission to team_member table" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.950860862Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=1.297128ms 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.951302107Z level=info msg="Executing migration" id="create dashboard acl table" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.951722313Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=419.935µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.95216893Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.952575159Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=406.109µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.952994775Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.953440339Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=444.964µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.954089323Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.954506064Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=416.489µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.954938854Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.955331618Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=392.514µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.955719575Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.956104946Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=385.131µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.956530452Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.956948795Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=418.342µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.957387416Z level=info msg="Executing migration" id="add index dashboard_permission" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.957783758Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=396.112µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.958188624Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.958435127Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=246.382µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.958885139Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.958986489Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=101.389µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.959293412Z level=info msg="Executing migration" id="create tag table" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.959618281Z level=info msg="Migration successfully executed" id="create tag table" duration=323.827µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.960052333Z level=info msg="Executing migration" id="add index tag.key_value" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.960426473Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=374.34µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.960834878Z level=info msg="Executing migration" id="create login attempt table" 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.961130942Z level=info msg="Migration successfully executed" id="create login attempt table" duration=295.954µs 2026-03-10T13:38:21.076 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.961535769Z level=info msg="Executing migration" id="add index login_attempt.username" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.9619053Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=369.391µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.962313335Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.96270624Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=393.386µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.963081321Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator 
t=2026-03-10T13:38:20.966931959Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=3.850367ms 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.967394574Z level=info msg="Executing migration" id="create login_attempt v2" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.96769171Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=296.515µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.968109482Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.968483212Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=373.65µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.968893199Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.969039833Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=146.785µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.969392844Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.969653292Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=259.897µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.97006944Z level=info msg="Executing migration" id="create user auth table" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.970375012Z level=info msg="Migration successfully executed" id="create user auth table" duration=305.362µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.970788256Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.971203783Z level=info msg="Migration successfully executed" 
id="create index IDX_user_auth_auth_module_auth_id - v1" duration=415.568µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.971625783Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.971650068Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=24.716µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.97213723Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.973619835Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=1.482464ms 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.974044179Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.975510373Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=1.465662ms 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.975910232Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.977329899Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=1.418715ms 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.977715139Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.97910484Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=1.388728ms 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.979512634Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.979891934Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" 
duration=379.169µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.980286191Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.981692663Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=1.406572ms 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.982100227Z level=info msg="Executing migration" id="create server_lock table" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.982433811Z level=info msg="Migration successfully executed" id="create server_lock table" duration=333.393µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.982864145Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.983254396Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=389.85µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.9836624Z level=info msg="Executing migration" id="create user auth token table" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.984022263Z level=info msg="Migration successfully executed" id="create user auth token table" duration=359.442µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.984440587Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.98481639Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=375.963µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.985252275Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.985628139Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=375.684µs 2026-03-10T13:38:21.077 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.986013319Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.986480193Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=466.834µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.986938882Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.9884822Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=1.543578ms 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.988887118Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.989276977Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=389.128µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.98966852Z level=info msg="Executing migration" id="create cache_data table" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.990016221Z level=info msg="Migration successfully executed" id="create cache_data table" duration=347.56µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.990409607Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.990742671Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=333.084µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.991090381Z level=info msg="Executing migration" id="create short_url table v1" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.991440677Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=350.035µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.991841978Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.992202362Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=360.195µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.992577184Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.992599997Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=23.114µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.99301338Z level=info msg="Executing migration" id="delete alert_definition table" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.993047645Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=34.586µs 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.993337587Z level=info msg="Executing migration" id="recreate alert_definition table" 2026-03-10T13:38:21.077 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.993689105Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=351.116µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.994070278Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.994465638Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=394.879µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.994861098Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.995261367Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=400.098µs 2026-03-10T13:38:21.078 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.995653672Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.995677256Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=23.154µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.996093415Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.996451815Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=358.15µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.996826227Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.997221937Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=396.01µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.997820607Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.998193215Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=371.777µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.998609945Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.99899255Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=382.566µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:20 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:20.999362033Z level=info msg="Executing migration" id="Add column paused in alert_definition" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.001012822Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=1.651411ms 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.001403562Z level=info msg="Executing migration" id="drop alert_definition table" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.001801527Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=398.225µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.002179535Z level=info msg="Executing migration" id="delete alert_definition_version table" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.002228286Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=48.851µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.002724454Z level=info msg="Executing migration" id="recreate alert_definition_version table" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.003100098Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=375.814µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.003505757Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.003876821Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=370.863µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.00427112Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.004634008Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=362.679µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.00499819Z level=info msg="Executing 
migration" id="alter alert_definition_version table data column to mediumtext in mysql" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.005020883Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=22.923µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.005351341Z level=info msg="Executing migration" id="drop alert_definition_version table" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.005740971Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=389.851µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.006111003Z level=info msg="Executing migration" id="create alert_instance table" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.006498798Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=388.216µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.00687327Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.007250576Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=377.165µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.007622331Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.008011931Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=389.74µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.008395539Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.01001018Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=1.614671ms 2026-03-10T13:38:21.078 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.010413535Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.01074769Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=336.299µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.011101853Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.011476654Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=374.751µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.011844894Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.020115973Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=8.270357ms 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.020617021Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.028365433Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=7.747168ms 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.028830322Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.029263243Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=432.751µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.029682146Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator 
t=2026-03-10T13:38:21.030044354Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=361.877µs 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.030478447Z level=info msg="Executing migration" id="add current_reason column related to current_state" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.032147039Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=1.668923ms 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.032603533Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.034203619Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=1.599915ms 2026-03-10T13:38:21.078 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.034683406Z level=info msg="Executing migration" id="create alert_rule table" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.035061785Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=378.869µs 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.035437097Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.035858195Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=419.725µs 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.036306895Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.036740336Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=433.372µs 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.037290866Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.037825637Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=534.319µs 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.038237097Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.038260581Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=23.814µs 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.03869287Z level=info msg="Executing migration" id="add column for to alert_rule" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.040538214Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=1.845464ms 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.040943452Z level=info msg="Executing migration" id="add column annotations to alert_rule" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.042845032Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=1.901449ms 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.0432935Z level=info msg="Executing migration" id="add column labels to alert_rule" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.045104269Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=1.809457ms 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.045550154Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.045907452Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=357.108µs 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.046316718Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 
2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.046735773Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=419.176µs 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.047137254Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.048879053Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=1.741579ms 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.049316022Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.050986999Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=1.670747ms 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.051402836Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.051792637Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=389.58µs 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.052206792Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.053995679Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=1.788807ms 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.054410917Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.05607989Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=1.669013ms 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.056482142Z level=info msg="Executing 
migration" id="fix is_paused column for alert_rule table" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.056508863Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=26.65µs 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.056958745Z level=info msg="Executing migration" id="create alert_rule_version table" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.057441469Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=482.563µs 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.057862778Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.058298082Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=434.993µs 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.058715062Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.059136491Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=421.98µs 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.059578498Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.059600639Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=22.342µs 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.060054608Z level=info msg="Executing migration" id="add column for to alert_rule_version" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.061841834Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=1.787165ms 
2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.062280695Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.064071647Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=1.790933ms 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.064540945Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.066375308Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=1.834192ms 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.066777451Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.06861004Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=1.832619ms 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.069024768Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 2026-03-10T13:38:21.079 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.070832179Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=1.805999ms 2026-03-10T13:38:21.359 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.071272122Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.071295526Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=23.655µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.081515893Z level=info msg="Executing migration" id=create_alert_configuration_table 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.08192493Z level=info msg="Migration successfully executed" 
id=create_alert_configuration_table duration=409.807µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.082458288Z level=info msg="Executing migration" id="Add column default in alert_configuration" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.084640682Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=2.182696ms 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.086533986Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.086560566Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=27.231µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.088532476Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.090514054Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=1.981288ms 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.090977461Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.091428476Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=450.805µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.09188451Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.093884702Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=2.000272ms 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.094365633Z level=info msg="Executing migration" id=create_ngalert_configuration_table 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.094694598Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=326.902µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.095158857Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.095622004Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=464.099µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.09606351Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.098030191Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=1.966311ms 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.098490593Z level=info msg="Executing migration" id="create provenance_type table" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.098843382Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=352.78µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.09929605Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.099722819Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=426.558µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.100164975Z level=info msg="Executing migration" id="create alert_image table" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.100494573Z level=info msg="Migration successfully executed" id="create alert_image table" duration=329.827µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.100975713Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 2026-03-10T13:38:21.360 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.101425365Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=449.601µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.101866781Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.101891467Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=25.027µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.102356858Z level=info msg="Executing migration" id=create_alert_configuration_history_table 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.102779099Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=422.1µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.103242105Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.103690273Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=447.337µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.104123124Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.104314302Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.104757321Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.105013089Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=255.728µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.105484472Z 
level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.10589473Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=409.877µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.106321648Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.109700161Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=3.37714ms 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.110353895Z level=info msg="Executing migration" id="create library_element table v1" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.111134405Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=780.51µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.111803518Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.112492016Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=689.029µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.112925628Z level=info msg="Executing migration" id="create library_element_connection table v1" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.113328702Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=402.804µs 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.11376585Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 2026-03-10T13:38:21.360 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.114197829Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=431.878µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.114653502Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.115090669Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=436.436µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.115517539Z level=info msg="Executing migration" id="increase max description length to 2048" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.115528149Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=10.91µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.116002215Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.116027333Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=25.358µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.116384991Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.116535314Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=150.182µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.11697171Z level=info msg="Executing migration" id="create data_keys table" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.117391716Z level=info msg="Migration successfully executed" id="create data_keys table" duration=420.056µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.117820169Z level=info msg="Executing migration" id="create secrets table" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.118158491Z level=info msg="Migration successfully executed" id="create secrets table" duration=338.312µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: 
logger=migrator t=2026-03-10T13:38:21.118579098Z level=info msg="Executing migration" id="rename data_keys name column to id" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.131818426Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=13.232975ms 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.1325047Z level=info msg="Executing migration" id="add name column into data_keys" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.135294332Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.790483ms 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.135799677Z level=info msg="Executing migration" id="copy data_keys id column values into name" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.135895557Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=96.14µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.136352201Z level=info msg="Executing migration" id="rename data_keys name column to label" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.145870474Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=9.512382ms 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.146485556Z level=info msg="Executing migration" id="rename data_keys id column back to name" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.155920834Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=9.432332ms 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.156450716Z level=info msg="Executing migration" id="create kv_store table v1" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.15681102Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=360.374µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.157367762Z level=info msg="Executing 
migration" id="add index kv_store.org_id-namespace-key" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.157803197Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=435.356µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.158251706Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.15835533Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=105.187µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.158802348Z level=info msg="Executing migration" id="create permission table" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.159167471Z level=info msg="Migration successfully executed" id="create permission table" duration=365.002µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.159651586Z level=info msg="Executing migration" id="add unique index permission.role_id" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.160047608Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=396.042µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.160466371Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.160906955Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=415.709µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.161321211Z level=info msg="Executing migration" id="create role table" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.161669774Z level=info msg="Migration successfully executed" id="create role table" duration=348.391µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.162075473Z level=info msg="Executing migration" id="add column display_name" 2026-03-10T13:38:21.361 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.164249151Z level=info msg="Migration successfully executed" id="add column display_name" duration=2.173368ms 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.164632859Z level=info msg="Executing migration" id="add column group_name" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.166641408Z level=info msg="Migration successfully executed" id="add column group_name" duration=2.007217ms 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.167025246Z level=info msg="Executing migration" id="add index role.org_id" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.167443589Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=418.223µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.167838488Z level=info msg="Executing migration" id="add unique index role_org_id_name" 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.168474508Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=635.249µs 2026-03-10T13:38:21.361 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.16888661Z level=info msg="Executing migration" id="add index role_org_id_uid" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.169360897Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=474.237µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.169809066Z level=info msg="Executing migration" id="create team role table" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.170152329Z level=info msg="Migration successfully executed" id="create team role table" duration=343.554µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.170643047Z level=info msg="Executing migration" id="add index team_role.org_id" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.171080595Z level=info 
msg="Migration successfully executed" id="add index team_role.org_id" duration=437.959µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.171545486Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.17200192Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=456.423µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.172471199Z level=info msg="Executing migration" id="add index team_role.team_id" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.172915289Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=444.131µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.173347809Z level=info msg="Executing migration" id="create user role table" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.17369003Z level=info msg="Migration successfully executed" id="create user role table" duration=342.231µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.174113282Z level=info msg="Executing migration" id="add index user_role.org_id" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.174538437Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=426.447µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.174935421Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.175365165Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=428.29µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.17577932Z level=info msg="Executing migration" id="add index user_role.user_id" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.176190118Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=410.648µs 
2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.17661824Z level=info msg="Executing migration" id="create builtin role table" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.176951654Z level=info msg="Migration successfully executed" id="create builtin role table" duration=333.383µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.177356291Z level=info msg="Executing migration" id="add index builtin_role.role_id" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.177770807Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=414.326µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.178198407Z level=info msg="Executing migration" id="add index builtin_role.name" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.178629434Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=431.007µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.179066962Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.181444162Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=2.377009ms 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.18188164Z level=info msg="Executing migration" id="add index builtin_role.org_id" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.182312075Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=430.325µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.182728486Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.183141478Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=412.722µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.183598202Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.184059907Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=460.152µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.184477849Z level=info msg="Executing migration" id="add unique index role.uid" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.184933722Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=455.593µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.185326016Z level=info msg="Executing migration" id="create seed assignment table" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.185633823Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=307.705µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.186047606Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.186468144Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=420.338µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.186880596Z level=info msg="Executing migration" id="add column hidden to role table" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.189132921Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.252547ms 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.189587511Z level=info msg="Executing migration" id="permission kind migration" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.191792158Z level=info msg="Migration successfully executed" id="permission kind migration" duration=2.204896ms 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.192205351Z level=info 
msg="Executing migration" id="permission attribute migration" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.194429884Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.225676ms 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.194851644Z level=info msg="Executing migration" id="permission identifier migration" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.19705146Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=2.199796ms 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.19749981Z level=info msg="Executing migration" id="add permission identifier index" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.197939794Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=439.832µs 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.198352054Z level=info msg="Executing migration" id="add permission action scope role_id index" 2026-03-10T13:38:21.362 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.19880887Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=456.425µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.199312712Z level=info msg="Executing migration" id="remove permission role_id action scope index" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.199737206Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=424.575µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.200196847Z level=info msg="Executing migration" id="create query_history table v1" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.200568382Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=371.385µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.201004979Z level=info msg="Executing migration" id="add index 
query_history.org_id-created_by-datasource_uid" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.201432019Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=426.789µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.203619914Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.203646403Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=28.253µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.204133205Z level=info msg="Executing migration" id="rbac disabled migrator" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.204156177Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=23.093µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.204624133Z level=info msg="Executing migration" id="teams permissions migration" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.20482525Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=201.398µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.205147973Z level=info msg="Executing migration" id="dashboard permissions" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.205401327Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=253.564µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.205805443Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.206053347Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=247.953µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.206512738Z level=info msg="Executing migration" id="drop managed folder create actions" 
2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.206613075Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=100.338µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.207158155Z level=info msg="Executing migration" id="alerting notification permissions" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.207409416Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=251.583µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.207851584Z level=info msg="Executing migration" id="create query_history_star table v1" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.208185378Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=333.744µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.208615804Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.209044015Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=429.283µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.209454343Z level=info msg="Executing migration" id="add column org_id in query_history_star" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.211724872Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=2.27126ms 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.21215662Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.212181276Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=25.136µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.212645986Z level=info msg="Executing migration" id="create correlation table v1" 
2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.213065491Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=419.324µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.213523579Z level=info msg="Executing migration" id="add index correlations.uid" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.213937894Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=414.075µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.214368079Z level=info msg="Executing migration" id="add index correlations.source_uid" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.214783958Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=415.899µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.215238298Z level=info msg="Executing migration" id="add correlation config column" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.217602723Z level=info msg="Migration successfully executed" id="add correlation config column" duration=2.364465ms 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.218044269Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.218489432Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=445.143µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.21890514Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.219335716Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=430.234µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.219750213Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 
13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.225871888Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=6.119712ms 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.22640132Z level=info msg="Executing migration" id="create correlation v2" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.226873824Z level=info msg="Migration successfully executed" id="create correlation v2" duration=472.123µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.227329747Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.22774817Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=419.104µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.22815975Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.228621525Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=461.755µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.229058983Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.229495811Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=437.058µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.229882986Z level=info msg="Executing migration" id="copy correlation v1 to v2" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.230005755Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=122.96µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.230401746Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: 
logger=migrator t=2026-03-10T13:38:21.230761399Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=358.961µs 2026-03-10T13:38:21.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.231164664Z level=info msg="Executing migration" id="add provisioning column" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.233546642Z level=info msg="Migration successfully executed" id="add provisioning column" duration=2.381657ms 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.233955638Z level=info msg="Executing migration" id="create entity_events table" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.234300913Z level=info msg="Migration successfully executed" id="create entity_events table" duration=345.256µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.234734556Z level=info msg="Executing migration" id="create dashboard public config v1" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.235166193Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=431.557µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.235651471Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.235826198Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.236277964Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.236445407Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.236866354Z level=info msg="Executing migration" id="Drop old dashboard public config table" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.237230708Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=364.914µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.237639873Z level=info msg="Executing migration" id="recreate dashboard public config v1" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.238032167Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=392.454µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.238481729Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.238896324Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=414.125µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.239312042Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.239744602Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=431.638µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.240145872Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.240575937Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=430.075µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.240957102Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.241400992Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=443.741µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.241796251Z level=info msg="Executing migration" id="Drop public config 
table" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.242144593Z level=info msg="Migration successfully executed" id="Drop public config table" duration=349.142µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.242585699Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.243027485Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=441.656µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.243454455Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.243880181Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=425.596µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.244292303Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.244729261Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=436.848µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.24513521Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.245577288Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=442.056µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.245993036Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.254091973Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=8.096882ms 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: 
logger=migrator t=2026-03-10T13:38:21.254645139Z level=info msg="Executing migration" id="add annotations_enabled column" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.257153403Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=2.509918ms 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.25776649Z level=info msg="Executing migration" id="add time_selection_enabled column" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.260187861Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=2.421312ms 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.260657971Z level=info msg="Executing migration" id="delete orphaned public dashboards" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.260787834Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=129.613µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.261296787Z level=info msg="Executing migration" id="add share column" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.263691509Z level=info msg="Migration successfully executed" id="add share column" duration=2.39417ms 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.264149996Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.264274229Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=124.513µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.264758326Z level=info msg="Executing migration" id="create file table" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.265195774Z level=info msg="Migration successfully executed" id="create file table" duration=436.606µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.265630888Z level=info msg="Executing migration" id="file table idx: path 
natural pk" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.266088725Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=457.706µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.26654557Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.267055854Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=510.566µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.267552124Z level=info msg="Executing migration" id="create file_meta table" 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.268019208Z level=info msg="Migration successfully executed" id="create file_meta table" duration=467.214µs 2026-03-10T13:38:21.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.2685385Z level=info msg="Executing migration" id="file table idx: path key" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.269076687Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=539.84µs 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.269667283Z level=info msg="Executing migration" id="set path collation in file table" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.269765867Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=99.196µs 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.270278597Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.270373824Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=95.929µs 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.270860716Z level=info msg="Executing migration" id="managed permissions migration" 2026-03-10T13:38:21.365 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.271182177Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=321.381µs 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.271735463Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.272003524Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=265.567µs 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.272842865Z level=info msg="Executing migration" id="RBAC action name migrator" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.273627033Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=784.228µs 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.274159159Z level=info msg="Executing migration" id="Add UID column to playlist" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.276965771Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=2.806793ms 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.277522383Z level=info msg="Executing migration" id="Update uid column values in playlist" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.277672304Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=150.011µs 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.278256757Z level=info msg="Executing migration" id="Add index for uid in playlist" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.27889381Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=637.003µs 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.27952917Z level=info msg="Executing migration" id="update group index for alert rules" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.279851352Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=322.194µs 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.280466082Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.280651359Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=186.92µs 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.281227257Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.281547466Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=319.959µs 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.282150475Z level=info msg="Executing migration" id="add action column to seed_assignment" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.285104193Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=2.953658ms 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.28565323Z level=info msg="Executing migration" id="add scope column to seed_assignment" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.288773631Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=3.117495ms 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.289394502Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.289917301Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=523.03µs 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.290362974Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 2026-03-10T13:38:21.365 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.316423487Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=26.047398ms 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.317068205Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.317588347Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=518.078µs 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.318012912Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.318468455Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=453.87µs 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.318891646Z level=info msg="Executing migration" id="add primary key to seed_assigment" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.326749703Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=7.856955ms 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.357205371Z level=info msg="Executing migration" id="add origin column to seed_assignment" 2026-03-10T13:38:21.365 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.360252203Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=3.049337ms 2026-03-10T13:38:21.582 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:21 vm05 bash[89636]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0... 
2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.388108416Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.388485633Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=378.639µs 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.398478114Z level=info msg="Executing migration" id="prevent seeding OnCall access" 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.398672117Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=194.013µs 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.399269506Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.399460452Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=191.048µs 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.400048352Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.400232957Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=184.735µs 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.400769172Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.400946593Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=177.502µs 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.401500129Z level=info msg="Executing migration" id="create folder table" 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.402051982Z level=info msg="Migration successfully executed" id="create folder table" 
duration=551.914µs 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.402636155Z level=info msg="Executing migration" id="Add index for parent_uid" 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.403279089Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=642.713µs 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.403797208Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.404336177Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=540.342µs 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.404795627Z level=info msg="Executing migration" id="Update folder title length" 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.404869846Z level=info msg="Migration successfully executed" id="Update folder title length" duration=74.85µs 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.40538468Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.405962892Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=578.362µs 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.406408896Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.406896589Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=487.613µs 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.407375786Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.407900819Z level=info msg="Migration successfully executed" id="Add unique index 
for title, parent_uid, and org_id" duration=525.655µs 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.408356742Z level=info msg="Executing migration" id="Sync dashboard and folder table" 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.408615185Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=258.373µs 2026-03-10T13:38:21.632 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.4090453Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.40923709Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=191.93µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.409696119Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.41019411Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=486.35µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.410646868Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.411154147Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=507.218µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.41163117Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.412126136Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=506.156µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.412622204Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.413130767Z level=info msg="Migration successfully executed" 
id="Add unique index UQE_folder_org_id_parent_uid_title" duration=508.833µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.413603922Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.414079531Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=475.72µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.414557577Z level=info msg="Executing migration" id="create anon_device table" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.414981129Z level=info msg="Migration successfully executed" id="create anon_device table" duration=422.801µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.415440299Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.415987903Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=547.615µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.416476027Z level=info msg="Executing migration" id="add index anon_device.updated_at" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.416961406Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=485.69µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.41738618Z level=info msg="Executing migration" id="create signing_key table" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.417811135Z level=info msg="Migration successfully executed" id="create signing_key table" duration=424.995µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.418289441Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.418741697Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=452.296µs 2026-03-10T13:38:21.633 
INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.419192882Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.419708186Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=515.535µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.420144422Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.420278574Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=134.242µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.420740108Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.423324585Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=2.584246ms 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.423743068Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.42418221Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=439.292µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.424730836Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.425253174Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=522.428µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.425722813Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.426221807Z level=info msg="Migration successfully executed" id="Delete unique 
index for dashboard_org_id_folder_id_title" duration=487.011µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.426684102Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.427252115Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=567.753µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.427726362Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.428350731Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=624.87µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.429060008Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.429645764Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=585.697µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.430114301Z level=info msg="Executing migration" id="create sso_setting table" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.43062072Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=506.468µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.431150741Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.431628526Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=478.327µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.432157616Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator 
t=2026-03-10T13:38:21.432378498Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=221.515µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.432924621Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.433011755Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=87.544µs 2026-03-10T13:38:21.633 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.433548018Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.436713302Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=3.165293ms 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.437178402Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.439804356Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=2.625945ms 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.440276661Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.440502934Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=227.495µs 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=migrator t=2026-03-10T13:38:21.440977251Z level=info msg="migrations completed" performed=547 skipped=0 duration=729.684176ms 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=sqlstore t=2026-03-10T13:38:21.441574961Z level=info msg="Created default organization" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=secrets t=2026-03-10T13:38:21.44217298Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: 
logger=plugin.store t=2026-03-10T13:38:21.449381941Z level=info msg="Loading plugins..." 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=local.finder t=2026-03-10T13:38:21.484793925Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=plugin.store t=2026-03-10T13:38:21.484964705Z level=info msg="Plugins loaded" count=55 duration=35.583443ms 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=query_data t=2026-03-10T13:38:21.486714139Z level=info msg="Query Service initialization" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=live.push_http t=2026-03-10T13:38:21.491313116Z level=info msg="Live Push Gateway initialization" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=ngalert.migration t=2026-03-10T13:38:21.4932927Z level=info msg=Starting 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=ngalert.migration t=2026-03-10T13:38:21.493498636Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=ngalert.migration orgID=1 t=2026-03-10T13:38:21.49367724Z level=info msg="Migrating alerts for organisation" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=ngalert.migration orgID=1 t=2026-03-10T13:38:21.493998993Z level=info msg="Alerts found to migrate" alerts=0 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=ngalert.migration t=2026-03-10T13:38:21.494753805Z level=info msg="Completed alerting migration" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=ngalert.state.manager t=2026-03-10T13:38:21.501356383Z level=info msg="Running in alternative execution of Error/NoData mode" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=infra.usagestats.collector t=2026-03-10T13:38:21.50217314Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=provisioning.datasources t=2026-03-10T13:38:21.503152513Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: 
logger=provisioning.datasources t=2026-03-10T13:38:21.507397277Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=provisioning.alerting t=2026-03-10T13:38:21.511922707Z level=info msg="starting to provision alerting" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=provisioning.alerting t=2026-03-10T13:38:21.511933217Z level=info msg="finished to provision alerting" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=grafanaStorageLogger t=2026-03-10T13:38:21.512304942Z level=info msg="Storage starting" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=http.server t=2026-03-10T13:38:21.513077969Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=http.server t=2026-03-10T13:38:21.513403308Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=ngalert.state.manager t=2026-03-10T13:38:21.513488056Z level=info msg="Warming state cache for startup" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=ngalert.state.manager t=2026-03-10T13:38:21.513662994Z level=info msg="State cache has been initialized" states=0 duration=174.507µs 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=provisioning.dashboard t=2026-03-10T13:38:21.51433957Z level=info msg="starting to provision dashboards" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=sqlstore.transactions t=2026-03-10T13:38:21.524125816Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=ngalert.multiorg.alertmanager t=2026-03-10T13:38:21.524979494Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=ngalert.scheduler t=2026-03-10T13:38:21.525051097Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 
2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=ticker t=2026-03-10T13:38:21.525107052Z level=info msg=starting first_tick=2026-03-10T13:38:30Z 2026-03-10T13:38:21.634 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=sqlstore.transactions t=2026-03-10T13:38:21.534608644Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 2026-03-10T13:38:21.923 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=provisioning.dashboard t=2026-03-10T13:38:21.633197344Z level=info msg="finished to provision dashboards" 2026-03-10T13:38:21.923 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=plugins.update.checker t=2026-03-10T13:38:21.63680193Z level=info msg="Update check succeeded" duration=112.456203ms 2026-03-10T13:38:21.923 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=grafana-apiserver t=2026-03-10T13:38:21.734233516Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-10T13:38:21.923 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:38:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=grafana-apiserver t=2026-03-10T13:38:21.73479698Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 2026-03-10T13:38:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:21 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:21 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:21 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:21 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:21 vm09 ceph-mon[53367]: Deploying daemon node-exporter.a on vm05 2026-03-10T13:38:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:21 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:21 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:21 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:21 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:22.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:21 vm05 ceph-mon[58955]: Deploying daemon node-exporter.a on vm05 2026-03-10T13:38:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:21 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:21 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:21 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:21 vm05 
ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:21 vm05 ceph-mon[51512]: Deploying daemon node-exporter.a on vm05 2026-03-10T13:38:23.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:22 vm05 ceph-mon[58955]: pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T13:38:23.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:22 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:23.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:22 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:38:23.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:22 vm05 ceph-mon[51512]: pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T13:38:23.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:22 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:23.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:22 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:38:23.082 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:22 vm05 bash[89636]: Getting image source signatures 2026-03-10T13:38:23.082 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:22 vm05 bash[89636]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24 2026-03-10T13:38:23.082 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:22 vm05 bash[89636]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510 2026-03-10T13:38:23.082 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:22 vm05 bash[89636]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a 2026-03-10T13:38:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:22 vm09 ceph-mon[53367]: pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T13:38:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:22 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:22 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:38:23.503 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[89405]: ts=2026-03-10T13:38:23.176Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002611829s 2026-03-10T13:38:23.831 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 bash[89636]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 bash[89636]: Writing manifest to image destination 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 podman[89636]: 2026-03-10 13:38:23.519296339 +0000 UTC m=+2.198613259 container create 166a8094f2e341c4a4b37b2d684ef28e8c69849e55e4950f368ae439bdf8f319 
(image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 podman[89636]: 2026-03-10 13:38:23.512886819 +0000 UTC m=+2.192203750 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 podman[89636]: 2026-03-10 13:38:23.569715933 +0000 UTC m=+2.249032864 container init 166a8094f2e341c4a4b37b2d684ef28e8c69849e55e4950f368ae439bdf8f319 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 podman[89636]: 2026-03-10 13:38:23.57443643 +0000 UTC m=+2.253753361 container start 166a8094f2e341c4a4b37b2d684ef28e8c69849e55e4950f368ae439bdf8f319 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 bash[89636]: 166a8094f2e341c4a4b37b2d684ef28e8c69849e55e4950f368ae439bdf8f319 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 systemd[1]: Started Ceph node-exporter.a for e063dc72-1c85-11f1-a098-09993c5c5b66. 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.590Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.590Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: 
ts=2026-03-10T13:38:23.591Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=node_exporter.go:117 level=info collector=arp 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=node_exporter.go:117 level=info collector=edac 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-10T13:38:23.832 
INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.591Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: 
ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=os 2026-03-10T13:38:23.832 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=stat 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=time 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=uname 2026-03-10T13:38:23.833 
INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-10T13:38:23.833 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 13:38:23 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a[89691]: ts=2026-03-10T13:38:23.592Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-10T13:38:24.842 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:24 vm09 ceph-mon[53367]: pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T13:38:24.842 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:24 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:24.842 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:24 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:24.842 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:24 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:24.842 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:24 vm09 ceph-mon[53367]: Deploying daemon node-exporter.b on vm09 2026-03-10T13:38:25.173 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:24 vm09 bash[80646]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0... 
2026-03-10T13:38:25.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:24 vm05 ceph-mon[51512]: pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T13:38:25.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:24 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:25.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:24 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:25.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:24 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:25.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:24 vm05 ceph-mon[51512]: Deploying daemon node-exporter.b on vm09
2026-03-10T13:38:25.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:24 vm05 ceph-mon[58955]: pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T13:38:25.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:24 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:25.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:24 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:25.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:24 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:25.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:24 vm05 ceph-mon[58955]: Deploying daemon node-exporter.b on vm09
2026-03-10T13:38:26.673 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:26 vm09 bash[80646]: Getting image source signatures
2026-03-10T13:38:26.722 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:26 vm09 bash[80646]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
2026-03-10T13:38:26.722 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:26 vm09 bash[80646]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
2026-03-10T13:38:26.722 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:26 vm09 bash[80646]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
2026-03-10T13:38:27.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:27 vm05 ceph-mon[51512]: pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T13:38:27.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:27 vm05 ceph-mon[58955]: pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T13:38:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-mon[53367]: pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T13:38:28.156 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:38:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 bash[80646]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 bash[80646]: Writing manifest to image destination
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 podman[80646]: 2026-03-10 13:38:27.740480048 +0000 UTC m=+2.791163578 container create e48e92c6aac7416aa8d9f313b3bc775431de36a8bd6b6bd51c0981113cd62a0e (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b, maintainer=The Prometheus Authors )
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 podman[80646]: 2026-03-10 13:38:27.733933456 +0000 UTC m=+2.784616995 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 podman[80646]: 2026-03-10 13:38:27.790345863 +0000 UTC m=+2.841029412 container init e48e92c6aac7416aa8d9f313b3bc775431de36a8bd6b6bd51c0981113cd62a0e (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b, maintainer=The Prometheus Authors )
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 podman[80646]: 2026-03-10 13:38:27.795507132 +0000 UTC m=+2.846190661 container start e48e92c6aac7416aa8d9f313b3bc775431de36a8bd6b6bd51c0981113cd62a0e (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b, maintainer=The Prometheus Authors )
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 bash[80646]: e48e92c6aac7416aa8d9f313b3bc775431de36a8bd6b6bd51c0981113cd62a0e
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 systemd[1]: Started Ceph node-exporter.b for e063dc72-1c85-11f1-a098-09993c5c5b66.
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.803Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.803Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=arp
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=bcache
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=bonding
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=btrfs
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=conntrack
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=cpu
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=cpufreq
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=diskstats
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=dmi
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=edac
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=entropy
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=fibrechannel
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=filefd
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=filesystem
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=hwmon
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=infiniband
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=ipvs
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=loadavg
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=mdadm
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=meminfo
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=netclass
2026-03-10T13:38:28.156 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=netdev
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=netstat
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=nfs
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=nfsd
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=nvme
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=os
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=powersupplyclass
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=pressure
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=rapl
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=schedstat
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=selinux
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=sockstat
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=softnet
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=stat
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=tapestats
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=textfile
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=thermal_zone
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=time
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=udp_queues
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=uname
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=vmstat
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=xfs
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=node_exporter.go:117 level=info collector=zfs
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
2026-03-10T13:38:28.157 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 13:38:27 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b[80699]: ts=2026-03-10T13:38:27.804Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
2026-03-10T13:38:29.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:28 vm05 ceph-mon[51512]: pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:38:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:28 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:28 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:28 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:28 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:28 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:28 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:38:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:28 vm05 ceph-mon[58955]: pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:38:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:28 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:28 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:28 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:28 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:28 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:28 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:38:29.267 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:28 vm09 ceph-mon[53367]: pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:38:29.267 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:28 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:29.267 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:28 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:29.267 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:28 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:29.267 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:28 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:29.267 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:28 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:29.267 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:28 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:38:30.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:38:30.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[51512]: pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:38:30.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:30.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:30.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:30.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:30.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:38:30.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:38:30.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:30.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[51512]: Reconfiguring alertmanager.a (dependencies changed)...
2026-03-10T13:38:30.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[51512]: Reconfiguring daemon alertmanager.a on vm05
2026-03-10T13:38:30.093 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:38:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T13:38:30.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:38:30.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[58955]: pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:38:30.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:30.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:30.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:30.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:30.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:38:30.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:38:30.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:30.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[58955]: Reconfiguring alertmanager.a (dependencies changed)...
2026-03-10T13:38:30.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:29 vm05 ceph-mon[58955]: Reconfiguring daemon alertmanager.a on vm05
2026-03-10T13:38:30.366 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 systemd[1]: Stopping Ceph alertmanager.a for e063dc72-1c85-11f1-a098-09993c5c5b66...
2026-03-10T13:38:30.366 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[89405]: ts=2026-03-10T13:38:30.354Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
2026-03-10T13:38:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:38:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:29 vm09 ceph-mon[53367]: pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:38:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:29 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:29 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:29 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:29 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:29 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:38:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:29 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:38:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:29 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:29 vm09 ceph-mon[53367]: Reconfiguring alertmanager.a (dependencies changed)...
2026-03-10T13:38:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:29 vm09 ceph-mon[53367]: Reconfiguring daemon alertmanager.a on vm05
2026-03-10T13:38:30.650 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 podman[90291]: 2026-03-10 13:38:30.366626939 +0000 UTC m=+0.029111428 container died d952ff23a2860f57ef3a4e9593f1995db18b2baa8edc61d5c6f1e502c70368e7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a, maintainer=The Prometheus Authors )
2026-03-10T13:38:30.651 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 podman[90291]: 2026-03-10 13:38:30.485365257 +0000 UTC m=+0.147849746 container remove d952ff23a2860f57ef3a4e9593f1995db18b2baa8edc61d5c6f1e502c70368e7 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a, maintainer=The Prometheus Authors )
2026-03-10T13:38:30.651 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 podman[90291]: 2026-03-10 13:38:30.486428997 +0000 UTC m=+0.148913486 volume remove 6b09f5b376820b5dae55feacd583cea9645c984cebcdb76411b1499f1267e469
2026-03-10T13:38:30.651 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 bash[90291]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a
2026-03-10T13:38:30.651 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@alertmanager.a.service: Deactivated successfully.
2026-03-10T13:38:30.651 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 systemd[1]: Stopped Ceph alertmanager.a for e063dc72-1c85-11f1-a098-09993c5c5b66.
2026-03-10T13:38:30.651 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 systemd[1]: Starting Ceph alertmanager.a for e063dc72-1c85-11f1-a098-09993c5c5b66...
2026-03-10T13:38:31.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 podman[90368]: 2026-03-10 13:38:30.651330503 +0000 UTC m=+0.017336114 volume create 13c89785b291a5cbbaae89e1b8b7c0ff9985e64daf7c99a40b2d45bee0970e09
2026-03-10T13:38:31.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 podman[90368]: 2026-03-10 13:38:30.654187311 +0000 UTC m=+0.020192922 container create 485d9e5ae1f7994227f8f5bc7837ba9a18804889f2efb71cc68ee40ae4f1b351 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a, maintainer=The Prometheus Authors )
2026-03-10T13:38:31.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 podman[90368]: 2026-03-10 13:38:30.698659733 +0000 UTC m=+0.064665354 container init 485d9e5ae1f7994227f8f5bc7837ba9a18804889f2efb71cc68ee40ae4f1b351 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a, maintainer=The Prometheus Authors )
2026-03-10T13:38:31.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 podman[90368]: 2026-03-10 13:38:30.704343292 +0000 UTC m=+0.070348904 container start 485d9e5ae1f7994227f8f5bc7837ba9a18804889f2efb71cc68ee40ae4f1b351 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a, maintainer=The Prometheus Authors )
2026-03-10T13:38:31.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 bash[90368]: 485d9e5ae1f7994227f8f5bc7837ba9a18804889f2efb71cc68ee40ae4f1b351
2026-03-10T13:38:31.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 podman[90368]: 2026-03-10 13:38:30.644601877 +0000 UTC m=+0.010607509 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
2026-03-10T13:38:31.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 systemd[1]: Started Ceph alertmanager.a for e063dc72-1c85-11f1-a098-09993c5c5b66.
2026-03-10T13:38:31.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[90378]: ts=2026-03-10T13:38:30.722Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
2026-03-10T13:38:31.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[90378]: ts=2026-03-10T13:38:30.722Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
2026-03-10T13:38:31.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[90378]: ts=2026-03-10T13:38:30.723Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.105 port=9094
2026-03-10T13:38:31.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[90378]: ts=2026-03-10T13:38:30.723Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
2026-03-10T13:38:31.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[90378]: ts=2026-03-10T13:38:30.759Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T13:38:31.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[90378]: ts=2026-03-10T13:38:30.759Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T13:38:31.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[90378]: ts=2026-03-10T13:38:30.763Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093
2026-03-10T13:38:31.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:30 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[90378]: ts=2026-03-10T13:38:30.763Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093
2026-03-10T13:38:32.020 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:31 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.020 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:31 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.020 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:31 vm09 ceph-mon[53367]: Reconfiguring prometheus.a (dependencies changed)...
2026-03-10T13:38:32.020 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:31 vm09 ceph-mon[53367]: Reconfiguring daemon prometheus.a on vm09
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 systemd[1]: Stopping Ceph prometheus.a for e063dc72-1c85-11f1-a098-09993c5c5b66...
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[78798]: ts=2026-03-10T13:38:31.837Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..."
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[78798]: ts=2026-03-10T13:38:31.838Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..."
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[78798]: ts=2026-03-10T13:38:31.838Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..."
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[78798]: ts=2026-03-10T13:38:31.838Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..."
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[78798]: ts=2026-03-10T13:38:31.838Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped"
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[78798]: ts=2026-03-10T13:38:31.838Z caller=main.go:1039 level=info msg="Stopping scrape manager..."
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[78798]: ts=2026-03-10T13:38:31.838Z caller=main.go:984 level=info msg="Scrape discovery manager stopped"
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[78798]: ts=2026-03-10T13:38:31.838Z caller=main.go:998 level=info msg="Notify discovery manager stopped"
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[78798]: ts=2026-03-10T13:38:31.838Z caller=main.go:1031 level=info msg="Scrape manager stopped"
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[78798]: ts=2026-03-10T13:38:31.838Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..."
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[78798]: ts=2026-03-10T13:38:31.838Z caller=main.go:1261 level=info msg="Notifier manager stopped"
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[78798]: ts=2026-03-10T13:38:31.838Z caller=main.go:1273 level=info msg="See you next time!"
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 podman[81395]: 2026-03-10 13:38:31.841031243 +0000 UTC m=+0.019960949 container died d50d9e9a3a1be9dc73febb2d4ce31a72c3016239280cb74467303a8538b5a8a7 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a, maintainer=The Prometheus Authors )
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 podman[81395]: 2026-03-10 13:38:31.955083943 +0000 UTC m=+0.134013649 container remove d50d9e9a3a1be9dc73febb2d4ce31a72c3016239280cb74467303a8538b5a8a7 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a, maintainer=The Prometheus Authors )
2026-03-10T13:38:32.020 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:31 vm09 bash[81395]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a
2026-03-10T13:38:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:31 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:31 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:31 vm05 ceph-mon[58955]: Reconfiguring prometheus.a (dependencies changed)...
2026-03-10T13:38:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:31 vm05 ceph-mon[58955]: Reconfiguring daemon prometheus.a on vm09
2026-03-10T13:38:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:31 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:31 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:31 vm05 ceph-mon[51512]: Reconfiguring prometheus.a (dependencies changed)...
2026-03-10T13:38:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:31 vm05 ceph-mon[51512]: Reconfiguring daemon prometheus.a on vm09
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@prometheus.a.service: Deactivated successfully.
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 systemd[1]: Stopped Ceph prometheus.a for e063dc72-1c85-11f1-a098-09993c5c5b66.
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 systemd[1]: Starting Ceph prometheus.a for e063dc72-1c85-11f1-a098-09993c5c5b66...
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 podman[81474]: 2026-03-10 13:38:32.143775302 +0000 UTC m=+0.022993342 container create 701a78c74ffd72dd32dfd6abdd9bc5cdffaf29c1f5bc4782d3c38311b37a1436 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a, maintainer=The Prometheus Authors )
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 podman[81474]: 2026-03-10 13:38:32.19973932 +0000 UTC m=+0.078957360 container init 701a78c74ffd72dd32dfd6abdd9bc5cdffaf29c1f5bc4782d3c38311b37a1436 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a, maintainer=The Prometheus Authors )
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 podman[81474]: 2026-03-10 13:38:32.205239314 +0000 UTC m=+0.084457354 container start 701a78c74ffd72dd32dfd6abdd9bc5cdffaf29c1f5bc4782d3c38311b37a1436 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a, maintainer=The Prometheus Authors )
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 bash[81474]: 701a78c74ffd72dd32dfd6abdd9bc5cdffaf29c1f5bc4782d3c38311b37a1436
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 podman[81474]: 2026-03-10 13:38:32.133564443 +0000 UTC m=+0.012782493 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 systemd[1]: Started Ceph prometheus.a for e063dc72-1c85-11f1-a098-09993c5c5b66.
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.226Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.226Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.226Z caller=main.go:623 level=info host_details="(Linux 5.14.0-686.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026 x86_64 vm09 (none))"
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.226Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.226Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.233Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.234Z caller=main.go:1129 level=info msg="Starting TSDB ..."
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.236Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.236Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.514µs
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.236Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.238Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.238Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.238Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.242Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.242Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=143.238µs wal_replay_duration=5.323313ms wbl_replay_duration=130ns total_replay_duration=5.482781ms
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.243Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.243Z caller=main.go:1153 level=info msg="TSDB started"
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.243Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.253Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=9.117363ms db_storage=882ns remote_storage=1.313µs web_handler=301ns query_engine=561ns scrape=859.699µs scrape_sd=131.816µs notify=6.873µs notify_sd=5.53µs rules=7.665717ms tracing=5.58µs
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.253Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
2026-03-10T13:38:32.424 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 13:38:32 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T13:38:32.253Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
2026-03-10T13:38:32.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:32 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:32] ENGINE Bus STOPPING
2026-03-10T13:38:32.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:32 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:32] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T13:38:32.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:32 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:32] ENGINE Bus STOPPED
2026-03-10T13:38:32.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:32 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:32] ENGINE Bus STARTING
2026-03-10T13:38:32.581 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:32 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:32] ENGINE Serving on http://:::9283
2026-03-10T13:38:32.582 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:32 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:32] ENGINE Bus STARTED
2026-03-10T13:38:32.582 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:32 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:32] ENGINE Bus STOPPING
2026-03-10T13:38:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:32 vm09 ceph-mon[53367]: pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:38:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:32 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:32 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:32 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:32 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T13:38:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:32 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch
2026-03-10T13:38:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:32 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:32 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T13:38:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:32 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch
2026-03-10T13:38:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:32 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:32 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T13:38:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:32 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch
2026-03-10T13:38:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:32 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:32 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:38:32.947 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:32 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[90378]: ts=2026-03-10T13:38:32.724Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000881224s
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[51512]: pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:32 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:32] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:32 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:32] ENGINE Bus STOPPED
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:32 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:32] ENGINE Bus STARTING
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[58955]: pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T13:38:32.947 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch
2026-03-10T13:38:32.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T13:38:32.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch
2026-03-10T13:38:32.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T13:38:32.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch
2026-03-10T13:38:32.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:32.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:32 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:38:33.225 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:33] ENGINE Serving on http://:::9283
2026-03-10T13:38:33.225 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:33] ENGINE Bus STARTED
2026-03-10T13:38:33.225 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:33] ENGINE Bus STOPPING
2026-03-10T13:38:33.225 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:33] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T13:38:33.225 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:33] ENGINE Bus STOPPED
2026-03-10T13:38:33.225 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:33] ENGINE Bus STARTING
2026-03-10T13:38:33.225 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:33] ENGINE Serving on http://:::9283
2026-03-10T13:38:33.225 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:33 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:13:38:33] ENGINE Bus STARTED
2026-03-10T13:38:34.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T13:38:34.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.'
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:38:34.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:33 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:33 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T13:38:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:33 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch 2026-03-10T13:38:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:33 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T13:38:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:33 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm09.local:9095"}]: dispatch 2026-03-10T13:38:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:33 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T13:38:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:33 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm09.local:3000"}]: dispatch 2026-03-10T13:38:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:33 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:33 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:33 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:33 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:33 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:38:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:33 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:38:34.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:33 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:38:35.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:34 vm05 ceph-mon[51512]: pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:35.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:34 vm05 ceph-mon[58955]: pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:35.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:34 vm09 ceph-mon[53367]: pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:37.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:36 vm05 ceph-mon[58955]: pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:38:37.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:36 vm05 ceph-mon[51512]: pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:38:37.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:36 vm09 ceph-mon[53367]: pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:38:38.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:37 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:38:38.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:37 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:38:38.160 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:37 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:38:38.423 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:38:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:38:39.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:38 vm09 ceph-mon[53367]: pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:38 vm05 ceph-mon[51512]: pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:38 vm05 ceph-mon[58955]: pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:40.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:38:40.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:38:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:38:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:38:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:38:41.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:40 vm05 ceph-mon[51512]: pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:38:41.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:40 vm05 ceph-mon[58955]: pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:38:41.082 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 13:38:40 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[90378]: ts=2026-03-10T13:38:40.727Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.004007868s 2026-03-10T13:38:41.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:40 vm09 ceph-mon[53367]: pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:38:43.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:42 vm05 ceph-mon[51512]: pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:43.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:42 vm05 ceph-mon[58955]: pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:43.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:42 vm09 ceph-mon[53367]: pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:44.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:43 vm05 ceph-mon[51512]: pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:44.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:43 vm05 ceph-mon[58955]: pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:44.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:43 vm09 ceph-mon[53367]: pgmap v21: 132 pgs: 
132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:46.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:46 vm05 ceph-mon[51512]: pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:38:46.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:46 vm05 ceph-mon[58955]: pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:38:46.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:46 vm09 ceph-mon[53367]: pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:38:48.423 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:38:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:38:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:48 vm09 ceph-mon[53367]: pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:48 vm05 ceph-mon[51512]: pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:49.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:48 vm05 ceph-mon[58955]: pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:38:49.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:38:49.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:38:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:38:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:38:51.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:50 vm05 ceph-mon[51512]: pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:38:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:50 vm05 ceph-mon[58955]: pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:38:51.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:50 vm09 ceph-mon[53367]: pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:38:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:52 vm05 ceph-mon[58955]: pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:52 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:38:53.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:52 vm05 ceph-mon[51512]: pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:53.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:52 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:38:53.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:52 vm09 ceph-mon[53367]: pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:53.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:52 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:38:55.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:55 vm05 ceph-mon[51512]: pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:55 vm05 ceph-mon[58955]: pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:55 vm09 ceph-mon[53367]: pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:56.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:56 vm05 ceph-mon[51512]: pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:38:56.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:56 vm05 ceph-mon[58955]: pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:38:56.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:56 vm09 ceph-mon[53367]: pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:38:58.531 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:38:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:38:58.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:58 vm05 ceph-mon[51512]: pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:58.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:58 vm05 ceph-mon[58955]: pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:58.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:58 vm09 ceph-mon[53367]: pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:38:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:38:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:38:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:38:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: 
dispatch 2026-03-10T13:38:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:38:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:38:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:38:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:39:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:00 vm09 ceph-mon[53367]: pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:39:01.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:00 vm05 ceph-mon[51512]: pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:39:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:00 vm05 ceph-mon[58955]: pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:39:02.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:02 vm09 ceph-mon[53367]: pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:39:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:02 vm05 ceph-mon[58955]: pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:39:03.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:02 vm05 ceph-mon[51512]: pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:39:04.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:04 vm09 ceph-mon[53367]: pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:39:05.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:04 vm05 ceph-mon[58955]: pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:39:05.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:04 vm05 ceph-mon[51512]: pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:39:07.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:06 vm05 ceph-mon[51512]: pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:39:07.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:06 vm05 ceph-mon[58955]: pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:39:07.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:06 vm09 ceph-mon[53367]: pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:39:08.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:07 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-10T13:39:08.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:07 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1c", "id": [1, 
5]}]: dispatch 2026-03-10T13:39:08.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:07 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-10T13:39:08.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:07 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1c", "id": [1, 5]}]: dispatch 2026-03-10T13:39:08.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:07 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1d", "id": [1, 2]}]: dispatch 2026-03-10T13:39:08.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:07 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1d", "id": [1, 2]}]: dispatch 2026-03-10T13:39:08.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:07 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:39:08.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:07 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-10T13:39:08.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:07 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1c", "id": [1, 5]}]: dispatch 2026-03-10T13:39:08.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:07 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-10T13:39:08.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:07 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1c", "id": [1, 5]}]: dispatch 2026-03-10T13:39:08.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:07 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1d", "id": [1, 2]}]: dispatch 2026-03-10T13:39:08.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:07 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1d", "id": [1, 2]}]: dispatch 2026-03-10T13:39:08.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:07 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:39:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:07 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-10T13:39:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:07 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1c", "id": [1, 5]}]: dispatch 2026-03-10T13:39:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:07 vm09 ceph-mon[53367]: from='mgr.14712 ' 
entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]: dispatch 2026-03-10T13:39:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:07 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1c", "id": [1, 5]}]: dispatch 2026-03-10T13:39:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:07 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1d", "id": [1, 2]}]: dispatch 2026-03-10T13:39:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:07 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1d", "id": [1, 2]}]: dispatch 2026-03-10T13:39:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:07 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:39:08.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:39:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:39:09.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:08 vm05 ceph-mon[51512]: pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:39:09.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:08 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]': finished 2026-03-10T13:39:09.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:08 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1c", "id": [1, 5]}]': finished 2026-03-10T13:39:09.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:08 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1d", "id": [1, 2]}]': finished 2026-03-10T13:39:09.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:08 vm05 ceph-mon[51512]: osdmap e60: 8 total, 8 up, 8 in 2026-03-10T13:39:09.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:08 vm05 ceph-mon[51512]: osdmap e61: 8 total, 8 up, 8 in 2026-03-10T13:39:09.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:08 vm05 ceph-mon[58955]: pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:39:09.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:08 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]': finished 2026-03-10T13:39:09.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:08 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1c", "id": [1, 5]}]': finished 2026-03-10T13:39:09.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:08 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1d", "id": [1, 2]}]': finished 2026-03-10T13:39:09.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:08 vm05 ceph-mon[58955]: osdmap e60: 8 total, 8 up, 8 in 2026-03-10T13:39:09.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 
10 13:39:08 vm05 ceph-mon[58955]: osdmap e61: 8 total, 8 up, 8 in 2026-03-10T13:39:09.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:08 vm09 ceph-mon[53367]: pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:39:09.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:08 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.12", "id": [1, 5]}]': finished 2026-03-10T13:39:09.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:08 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1c", "id": [1, 5]}]': finished 2026-03-10T13:39:09.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:08 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.1d", "id": [1, 2]}]': finished 2026-03-10T13:39:09.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:08 vm09 ceph-mon[53367]: osdmap e60: 8 total, 8 up, 8 in 2026-03-10T13:39:09.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:08 vm09 ceph-mon[53367]: osdmap e61: 8 total, 8 up, 8 in 2026-03-10T13:39:10.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:10.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:39:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:39:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:39:10.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:10 vm05 ceph-mon[58955]: pgmap v36: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:39:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:10 vm05 ceph-mon[58955]: Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY) 2026-03-10T13:39:11.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:10 vm05 ceph-mon[51512]: pgmap v36: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:39:11.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:10 vm05 ceph-mon[51512]: Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY) 2026-03-10T13:39:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:10 vm09 ceph-mon[53367]: pgmap v36: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:39:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:10 vm09 ceph-mon[53367]: Health check failed: Reduced data availability: 3 pgs peering (PG_AVAILABILITY) 2026-03-10T13:39:13.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:12 vm05 ceph-mon[51512]: pgmap v37: 
132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:39:13.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:12 vm05 ceph-mon[58955]: pgmap v37: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:39:13.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:12 vm09 ceph-mon[53367]: pgmap v37: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:39:15.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:14 vm05 ceph-mon[58955]: pgmap v38: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T13:39:15.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:14 vm05 ceph-mon[51512]: pgmap v38: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T13:39:15.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:14 vm09 ceph-mon[53367]: pgmap v38: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T13:39:16.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:15 vm09 ceph-mon[53367]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs peering) 2026-03-10T13:39:16.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:15 vm09 ceph-mon[53367]: Cluster is now healthy 2026-03-10T13:39:16.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:15 vm05 ceph-mon[51512]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs peering) 2026-03-10T13:39:16.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:15 vm05 ceph-mon[51512]: Cluster is now healthy 2026-03-10T13:39:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:15 vm05 ceph-mon[58955]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs peering) 2026-03-10T13:39:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:15 vm05 ceph-mon[58955]: Cluster is now healthy 2026-03-10T13:39:17.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:16 vm09 ceph-mon[53367]: pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 54 B/s, 1 objects/s recovering 2026-03-10T13:39:17.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:16 vm05 ceph-mon[51512]: pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 54 B/s, 1 objects/s recovering 2026-03-10T13:39:17.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:16 vm05 ceph-mon[58955]: pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 54 B/s, 1 objects/s recovering 2026-03-10T13:39:17.924 INFO:tasks.workunit.client.0.vm05.stderr:Note: switching to '75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b'. 2026-03-10T13:39:17.924 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-10T13:39:17.924 INFO:tasks.workunit.client.0.vm05.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-10T13:39:17.924 INFO:tasks.workunit.client.0.vm05.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-10T13:39:17.925 INFO:tasks.workunit.client.0.vm05.stderr:state without impacting any branches by switching back to a branch. 
2026-03-10T13:39:17.925 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-10T13:39:17.925 INFO:tasks.workunit.client.0.vm05.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-10T13:39:17.925 INFO:tasks.workunit.client.0.vm05.stderr:do so (now or later) by using -c with the switch command. Example: 2026-03-10T13:39:17.925 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-10T13:39:17.925 INFO:tasks.workunit.client.0.vm05.stderr: git switch -c <new-branch-name> 2026-03-10T13:39:17.925 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-10T13:39:17.925 INFO:tasks.workunit.client.0.vm05.stderr:Or undo this operation with: 2026-03-10T13:39:17.925 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-10T13:39:17.925 INFO:tasks.workunit.client.0.vm05.stderr: git switch - 2026-03-10T13:39:17.925 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-10T13:39:17.925 INFO:tasks.workunit.client.0.vm05.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-10T13:39:17.925 INFO:tasks.workunit.client.0.vm05.stderr: 2026-03-10T13:39:17.925 INFO:tasks.workunit.client.0.vm05.stderr:HEAD is now at 75a68fd8ca3 qa/suites/orch/cephadm/osds: drop nvme_loop task 2026-03-10T13:39:17.930 DEBUG:teuthology.orchestra.run.vm05:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-10T13:39:17.987 INFO:tasks.workunit.client.0.vm05.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-10T13:39:17.988 INFO:tasks.workunit.client.0.vm05.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-10T13:39:17.988 INFO:tasks.workunit.client.0.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-10T13:39:18.033 INFO:tasks.workunit.client.0.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-10T13:39:18.067 INFO:tasks.workunit.client.0.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-10T13:39:18.098 INFO:tasks.workunit.client.0.vm05.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-10T13:39:18.100 INFO:tasks.workunit.client.0.vm05.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-10T13:39:18.100 INFO:tasks.workunit.client.0.vm05.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-10T13:39:18.136 INFO:tasks.workunit.client.0.vm05.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-10T13:39:18.139 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T13:39:18.139 DEBUG:teuthology.orchestra.run.vm05:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-10T13:39:18.196 INFO:tasks.workunit:Running workunits matching rados/test.sh on client.0... 2026-03-10T13:39:18.197 INFO:tasks.workunit:Running workunit rados/test.sh... 
2026-03-10T13:39:18.197 DEBUG:teuthology.orchestra.run.vm05:workunit test rados/test.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh 2026-03-10T13:39:18.258 INFO:tasks.workunit.client.0.vm05.stderr:+ parallel=1 2026-03-10T13:39:18.258 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' '' = --serial ']' 2026-03-10T13:39:18.258 INFO:tasks.workunit.client.0.vm05.stderr:+ crimson=0 2026-03-10T13:39:18.258 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' '' = --crimson ']' 2026-03-10T13:39:18.258 INFO:tasks.workunit.client.0.vm05.stderr:+ color= 2026-03-10T13:39:18.258 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -t 1 ']' 2026-03-10T13:39:18.258 INFO:tasks.workunit.client.0.vm05.stderr:+ trap cleanup EXIT ERR HUP INT QUIT 2026-03-10T13:39:18.258 INFO:tasks.workunit.client.0.vm05.stderr:+ GTEST_OUTPUT_DIR=/home/ubuntu/cephtest/archive/unit_test_xml_report 2026-03-10T13:39:18.258 INFO:tasks.workunit.client.0.vm05.stderr:+ mkdir -p /home/ubuntu/cephtest/archive/unit_test_xml_report 2026-03-10T13:39:18.258 INFO:tasks.workunit.client.0.vm05.stderr:+ declare -A pids 2026-03-10T13:39:18.259 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.259 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.259 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_aio 2026-03-10T13:39:18.259 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_aio' 2026-03-10T13:39:18.259 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_aio 2026-03-10T13:39:18.259 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.261 INFO:tasks.workunit.client.0.vm05.stdout:test api_aio on pid 90961 2026-03-10T13:39:18.261 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_aio 2026-03-10T13:39:18.261 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=90961 2026-03-10T13:39:18.261 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_aio on pid 90961' 2026-03-10T13:39:18.261 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=90961 2026-03-10T13:39:18.261 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.261 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.262 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_aio 
--gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio.xml 2>&1 | tee ceph_test_rados_api_aio.log | sed "s/^/ api_aio: /"' 2026-03-10T13:39:18.262 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_aio_pp 2026-03-10T13:39:18.262 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_aio_pp' 2026-03-10T13:39:18.262 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.262 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.262 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.263 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.263 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.263 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.263 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.263 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.263 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.263 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.263 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.263 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.263 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.263 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.263 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.264 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_aio_pp 2026-03-10T13:39:18.264 INFO:tasks.workunit.client.0.vm05.stdout:test api_aio_pp on pid 90968 2026-03-10T13:39:18.264 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_aio_pp 2026-03-10T13:39:18.264 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=90968 2026-03-10T13:39:18.264 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_aio_pp on pid 90968' 2026-03-10T13:39:18.265 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=90968 2026-03-10T13:39:18.265 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.265 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.265 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_io 2026-03-10T13:39:18.265 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_io' 2026-03-10T13:39:18.265 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_aio_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio_pp.xml 2>&1 | tee ceph_test_rados_api_aio_pp.log | sed "s/^/ api_aio_pp: /"' 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.266 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.267 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.267 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.267 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.267 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.267 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.267 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.267 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.267 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.267 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.267 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_io 2026-03-10T13:39:18.268 INFO:tasks.workunit.client.0.vm05.stdout:test api_io on pid 90978 2026-03-10T13:39:18.268 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_io 2026-03-10T13:39:18.268 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=90978 2026-03-10T13:39:18.268 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_io on pid 90978' 2026-03-10T13:39:18.268 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=90978 2026-03-10T13:39:18.269 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.269 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.269 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_io_pp 2026-03-10T13:39:18.269 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_io_pp' 2026-03-10T13:39:18.269 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.269 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.269 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.269 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.269 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.269 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.269 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.269 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_io --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io.xml 2>&1 | tee ceph_test_rados_api_io.log | sed "s/^/ api_io: /"' 2026-03-10T13:39:18.269 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.269 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.270 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.270 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-10T13:39:18.270 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.270 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.270 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.270 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.271 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.272 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.272 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.272 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.272 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.272 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.272 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.272 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.272 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.273 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_io_pp 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stdout:test api_io_pp on pid 90989 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_io_pp 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=90989 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_io_pp on pid 90989' 2026-03-10T13:39:18.274 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=90989 2026-03-10T13:39:18.275 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.275 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.275 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.275 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.275 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.275 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-10T13:39:18.275 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.275 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_io_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io_pp.xml 2>&1 | tee ceph_test_rados_api_io_pp.log | sed "s/^/ api_io_pp: /"' 2026-03-10T13:39:18.277 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.277 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.277 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.277 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.277 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.277 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.277 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.277 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_asio 2026-03-10T13:39:18.277 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_asio' 2026-03-10T13:39:18.279 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.280 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.282 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.282 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.284 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_asio 2026-03-10T13:39:18.286 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.286 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/90961/exe ']' 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stdout:test api_asio on pid 91006 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_asio 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91006 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_asio on pid 91006' 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91006 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.287 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.288 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_list 2026-03-10T13:39:18.288 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_list' 2026-03-10T13:39:18.288 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_asio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_asio.xml 2>&1 | tee ceph_test_rados_api_asio.log | sed "s/^/ api_asio: /"' 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.289 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.290 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/90961/exe 2026-03-10T13:39:18.290 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.290 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.290 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.290 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.290 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.290 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.290 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.290 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.290 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.290 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.291 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.292 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.293 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.293 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.293 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.293 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.293 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.293 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.293 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.293 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.293 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.293 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_aio: /' 2026-03-10T13:39:18.294 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_aio.log 2026-03-10T13:39:18.295 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.296 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.296 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.296 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.296 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.296 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.296 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.296 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.296 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.296 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.296 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_aio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio.xml 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.299 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.299 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.300 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.301 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.301 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.301 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_list 2026-03-10T13:39:18.301 INFO:tasks.workunit.client.0.vm05.stdout:test api_list on pid 91033 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_list 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91033 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_list on pid 91033' 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91033 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.302 
INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.302 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/90989/exe ']' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/less.sh 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/90968/exe ']' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_lock 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_lock' 2026-03-10T13:39:18.303 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_list --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_list.xml 2>&1 | tee ceph_test_rados_api_list.log | sed "s/^/ api_list: /"' 2026-03-10T13:39:18.304 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.304 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.304 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.304 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.304 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.304 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.304 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.304 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.304 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.305 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.305 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.305 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.305 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.305 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.305 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/90989/exe 2026-03-10T13:39:18.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.306 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.308 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_io_pp: /' 2026-03-10T13:39:18.309 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_io_pp.log 2026-03-10T13:39:18.309 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_io_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io_pp.xml 2026-03-10T13:39:18.311 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.312 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/90968/exe 2026-03-10T13:39:18.313 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.313 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.313 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.313 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.313 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.313 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.313 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.313 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.313 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.313 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.313 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.314 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.314 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.314 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.314 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_lock 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 
config 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:18.315 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/90978/exe ']' 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_lock 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stdout:test api_lock on pid 91055 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91055 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_lock on pid 91055' 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91055 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.317 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.317 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.317 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.317 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_aio_pp.log 2026-03-10T13:39:18.317 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.317 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.317 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.317 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.317 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.317 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.318 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_aio_pp: /' 2026-03-10T13:39:18.319 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_aio_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio_pp.xml 2026-03-10T13:39:18.321 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_lock --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock.xml 2>&1 | tee ceph_test_rados_api_lock.log | sed "s/^/ api_lock: /"' 2026-03-10T13:39:18.321 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.321 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-10T13:39:18.322 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.322 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.322 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.322 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.323 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_lock_pp 2026-03-10T13:39:18.323 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_lock_pp' 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.326 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.328 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.331 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/90978/exe 2026-03-10T13:39:18.331 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.331 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.331 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.331 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.331 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.331 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.331 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.332 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.332 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.332 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.332 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.332 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.332 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.332 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.332 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.332 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.332 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.332 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.332 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.332 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.332 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.333 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.333 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.333 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.333 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.333 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.333 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.333 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.333 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.333 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.333 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.333 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.333 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.337 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.338 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.340 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_lock_pp 2026-03-10T13:39:18.341 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.341 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.341 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.341 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.341 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.341 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.341 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.341 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.341 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.342 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_io --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io.xml 2026-03-10T13:39:18.342 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_io.log 2026-03-10T13:39:18.342 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_io: /' 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.346 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.347 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.347 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-10T13:39:18.347 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.347 INFO:tasks.workunit.client.0.vm05.stdout:test api_lock_pp on pid 91097 2026-03-10T13:39:18.347 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_lock_pp 2026-03-10T13:39:18.347 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91097 2026-03-10T13:39:18.347 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_lock_pp on pid 91097' 2026-03-10T13:39:18.347 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91097 2026-03-10T13:39:18.347 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.347 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.352 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_misc 2026-03-10T13:39:18.352 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_misc' 2026-03-10T13:39:18.352 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.352 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.353 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_lock_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock_pp.xml 2>&1 | tee ceph_test_rados_api_lock_pp.log | sed "s/^/ api_lock_pp: /"' 2026-03-10T13:39:18.355 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.355 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.355 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.355 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.355 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.355 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.355 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/less.sh 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:18.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91006/exe ']' 2026-03-10T13:39:18.359 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.360 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.360 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.360 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.360 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.360 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.360 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.360 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.361 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.361 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.361 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.361 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.361 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.362 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_misc 2026-03-10T13:39:18.362 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91006/exe 2026-03-10T13:39:18.362 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.364 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.365 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.365 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.365 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.365 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 
2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.366 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.367 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.367 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.367 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.367 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.367 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.367 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.367 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.367 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.367 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.367 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.367 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_asio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_asio.xml 2026-03-10T13:39:18.369 INFO:tasks.workunit.client.0.vm05.stdout:test api_misc on pid 91137 2026-03-10T13:39:18.369 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_misc 2026-03-10T13:39:18.369 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91137 2026-03-10T13:39:18.369 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_misc on pid 91137' 2026-03-10T13:39:18.369 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91137 2026-03-10T13:39:18.369 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.369 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.370 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.371 
INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_misc --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc.xml 2>&1 | tee ceph_test_rados_api_misc.log | sed "s/^/ api_misc: /"' 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:18.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91033/exe ']' 2026-03-10T13:39:18.372 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_asio.log 2026-03-10T13:39:18.372 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.372 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-10T13:39:18.372 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.372 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.372 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.372 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.372 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_misc_pp 2026-03-10T13:39:18.372 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_misc_pp' 2026-03-10T13:39:18.372 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_asio: /' 2026-03-10T13:39:18.375 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.376 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.376 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.376 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.376 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.376 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.376 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.376 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.376 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.379 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91055/exe ']' 2026-03-10T13:39:18.380 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91033/exe 2026-03-10T13:39:18.381 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_misc_pp 2026-03-10T13:39:18.381 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.382 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.382 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.382 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.382 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.382 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.382 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.382 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.382 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.382 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.382 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.382 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.382 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.382 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_list: /' 2026-03-10T13:39:18.383 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_list.log 2026-03-10T13:39:18.384 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_list --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_list.xml 2026-03-10T13:39:18.384 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.384 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.384 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.384 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.384 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.384 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.385 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.385 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.385 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.389 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.391 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_misc_pp 2026-03-10T13:39:18.391 INFO:tasks.workunit.client.0.vm05.stdout:test api_misc_pp on pid 91173 2026-03-10T13:39:18.391 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91173 2026-03-10T13:39:18.391 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_misc_pp on pid 91173' 2026-03-10T13:39:18.391 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91173 2026-03-10T13:39:18.391 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.391 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.393 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_misc_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc_pp.xml 2>&1 | tee ceph_test_rados_api_misc_pp.log | sed "s/^/ api_misc_pp: /"' 2026-03-10T13:39:18.393 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_tier_pp 2026-03-10T13:39:18.395 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.395 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_tier_pp' 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.396 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.397 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.399 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.399 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.399 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.399 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.399 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.399 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.399 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.399 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.399 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.400 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.400 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.400 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.400 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.400 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.400 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.400 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.400 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.400 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.400 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.400 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.400 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91055/exe 2026-03-10T13:39:18.400 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.400 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.400 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.401 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.401 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.401 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.401 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.401 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.401 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.401 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.401 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.401 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.401 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.401 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.401 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.403 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.406 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_tier_pp 2026-03-10T13:39:18.408 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_lock --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock.xml 2026-03-10T13:39:18.408 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.408 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_lock: /' 2026-03-10T13:39:18.409 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_lock.log 2026-03-10T13:39:18.409 INFO:tasks.workunit.client.0.vm05.stdout:test api_tier_pp on pid 91197 2026-03-10T13:39:18.410 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_tier_pp 2026-03-10T13:39:18.410 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91197 2026-03-10T13:39:18.410 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_tier_pp on pid 91197' 2026-03-10T13:39:18.410 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91197 2026-03-10T13:39:18.410 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.410 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.411 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.411 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.411 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.411 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in 
/etc/profile.d/*.sh 2026-03-10T13:39:18.411 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.411 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.411 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.411 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.415 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91097/exe ']' 2026-03-10T13:39:18.417 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91097/exe 2026-03-10T13:39:18.417 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.418 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.418 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.418 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.418 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.418 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.418 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.418 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.418 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.418 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.418 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.418 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.418 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.418 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_lock_pp: /' 2026-03-10T13:39:18.419 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_tier_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_tier_pp.xml 2>&1 | tee ceph_test_rados_api_tier_pp.log | sed "s/^/ api_tier_pp: /"' 2026-03-10T13:39:18.419 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_lock_pp.log 2026-03-10T13:39:18.420 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_lock_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock_pp.xml 2026-03-10T13:39:18.420 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.420 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.420 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.420 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.420 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.420 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.420 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_pool 2026-03-10T13:39:18.420 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_pool' 2026-03-10T13:39:18.422 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.422 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.422 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.422 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.422 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.422 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.422 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.422 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.422 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.422 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.424 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.424 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.424 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.424 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.424 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.424 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.424 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.430 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.432 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_pool 2026-03-10T13:39:18.432 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stdout:test api_pool on pid 91232 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_pool 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91232 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_pool on pid 91232' 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91232 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.440 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.442 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.443 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_snapshots 2026-03-10T13:39:18.444 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_pool --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_pool.xml 2>&1 | tee ceph_test_rados_api_pool.log | sed "s/^/ api_pool: /"' 2026-03-10T13:39:18.445 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_snapshots' 2026-03-10T13:39:18.446 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.446 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.446 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.446 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.446 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.446 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.446 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.446 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.446 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.448 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.448 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.448 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.448 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.448 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.448 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.448 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.448 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.448 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.448 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.448 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.448 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.448 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.448 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.448 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.449 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.450 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.450 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.450 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.450 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.450 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.451 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.451 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.452 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.452 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.452 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:18.453 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91197/exe ']' 2026-03-10T13:39:18.454 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.454 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.454 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.454 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.454 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.454 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.454 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.455 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.455 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.455 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.455 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.455 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.455 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.455 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.455 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-10T13:39:18.455 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.455 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.455 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.455 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.455 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.455 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91197/exe 2026-03-10T13:39:18.456 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.459 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.459 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.459 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.459 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.459 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.459 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.459 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.459 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.459 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.459 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.459 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.459 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.460 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_snapshots 2026-03-10T13:39:18.460 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.461 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.461 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.461 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.461 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.461 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.463 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.464 INFO:tasks.workunit.client.0.vm05.stdout:test api_snapshots on pid 91289 2026-03-10T13:39:18.464 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_snapshots 2026-03-10T13:39:18.464 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91289 2026-03-10T13:39:18.464 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_snapshots on pid 91289' 2026-03-10T13:39:18.464 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91289 2026-03-10T13:39:18.464 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel 
open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.464 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.464 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.465 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_tier_pp.log 2026-03-10T13:39:18.466 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.466 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.466 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.466 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.466 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.466 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.466 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.466 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.466 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.466 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.466 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.466 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.466 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.466 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.467 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.467 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.467 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.467 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.467 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.467 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:18.467 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91137/exe ']' 2026-03-10T13:39:18.467 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_snapshots --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots.xml 2>&1 | tee ceph_test_rados_api_snapshots.log | sed "s/^/ api_snapshots: /"' 2026-03-10T13:39:18.467 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_tier_pp: /' 2026-03-10T13:39:18.470 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.470 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-10T13:39:18.470 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.470 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.470 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.470 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.470 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.470 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.470 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.470 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.470 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.470 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.470 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.470 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.470 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_tier_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_tier_pp.xml 2026-03-10T13:39:18.471 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91137/exe 2026-03-10T13:39:18.472 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.472 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.472 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.472 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.472 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.472 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.472 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.472 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.472 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.472 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.472 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.472 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.472 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' 
-t 0 ']' 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.475 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.479 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.479 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.479 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.479 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.479 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.479 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.479 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.479 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.479 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.479 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_snapshots_pp 2026-03-10T13:39:18.480 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_snapshots_pp' 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.484 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.485 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.486 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.486 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.486 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.486 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.486 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.486 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91289/exe ']' 2026-03-10T13:39:18.487 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.488 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91173/exe ']' 2026-03-10T13:39:18.489 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91289/exe 2026-03-10T13:39:18.489 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.489 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.489 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.489 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.489 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.490 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.490 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.490 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.490 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.490 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.490 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.490 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.490 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.490 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_misc --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc.xml 2026-03-10T13:39:18.490 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_snapshots: /' 2026-03-10T13:39:18.490 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91173/exe 2026-03-10T13:39:18.491 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_misc.log 2026-03-10T13:39:18.491 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_snapshots.log 2026-03-10T13:39:18.492 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_misc: /' 2026-03-10T13:39:18.492 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.493 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.493 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.493 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.494 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.494 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.494 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.494 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_snapshots --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots.xml 2026-03-10T13:39:18.494 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.494 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.494 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.494 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.494 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.494 
INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.495 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_snapshots_pp 2026-03-10T13:39:18.495 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.495 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_misc_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc_pp.xml 2026-03-10T13:39:18.496 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_misc_pp.log 2026-03-10T13:39:18.496 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_misc_pp: /' 2026-03-10T13:39:18.498 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.498 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.498 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.498 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.498 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.498 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.498 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.498 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.498 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.500 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_snapshots_pp 2026-03-10T13:39:18.500 INFO:tasks.workunit.client.0.vm05.stdout:test api_snapshots_pp on pid 91344 2026-03-10T13:39:18.500 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91344 2026-03-10T13:39:18.500 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_snapshots_pp on pid 91344' 2026-03-10T13:39:18.500 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91344 2026-03-10T13:39:18.500 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.500 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.509 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_snapshots_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots_pp.xml 2>&1 | tee ceph_test_rados_api_snapshots_pp.log | sed "s/^/ api_snapshots_pp: /"' 2026-03-10T13:39:18.510 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_stat 2026-03-10T13:39:18.510 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_stat' 2026-03-10T13:39:18.517 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.517 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-10T13:39:18.518 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.518 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.518 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.518 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.521 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.526 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.526 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.526 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.526 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.526 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.526 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.526 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.526 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.528 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.534 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.534 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.534 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.534 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.534 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.534 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_stat 2026-03-10T13:39:18.534 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.536 INFO:tasks.workunit.client.0.vm05.stdout:test api_stat on pid 91369 2026-03-10T13:39:18.536 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_stat 2026-03-10T13:39:18.537 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91369 2026-03-10T13:39:18.537 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_stat on pid 91369' 2026-03-10T13:39:18.537 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91369 2026-03-10T13:39:18.537 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.537 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.540 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.541 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.542 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_stat --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat.xml 2>&1 | tee ceph_test_rados_api_stat.log | sed "s/^/ api_stat: /"' 2026-03-10T13:39:18.542 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.542 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.542 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.542 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.542 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.542 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.542 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_stat_pp 2026-03-10T13:39:18.543 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_stat_pp' 2026-03-10T13:39:18.544 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_stat_pp 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/less.sh 2026-03-10T13:39:18.545 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.546 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.546 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.546 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.546 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:18.546 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91232/exe ']' 2026-03-10T13:39:18.546 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.547 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.547 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.547 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.547 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.547 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.547 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.547 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.548 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.548 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.548 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.548 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.548 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.548 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.548 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.548 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.548 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.548 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.550 INFO:tasks.workunit.client.0.vm05.stdout:test api_stat_pp on pid 91397 2026-03-10T13:39:18.550 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_stat_pp 2026-03-10T13:39:18.550 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91397 2026-03-10T13:39:18.550 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_stat_pp on pid 91397' 2026-03-10T13:39:18.550 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91397 2026-03-10T13:39:18.550 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.550 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.550 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_stat_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat_pp.xml 2>&1 | tee ceph_test_rados_api_stat_pp.log | sed "s/^/ api_stat_pp: /"' 2026-03-10T13:39:18.551 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.551 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.551 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.551 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.552 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.552 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.552 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.552 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.552 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.552 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.552 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.552 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.552 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.552 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.553 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91232/exe 2026-03-10T13:39:18.554 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.557 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.557 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.557 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.557 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.557 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.557 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.557 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.557 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.557 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.557 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.557 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.557 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.558 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_watch_notify 2026-03-10T13:39:18.558 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_watch_notify' 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.561 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.562 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.562 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-10T13:39:18.562 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.562 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.562 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.562 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.562 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.562 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.562 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.562 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.563 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.563 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.563 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.563 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.563 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.563 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.563 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_pool --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_pool.xml 2026-03-10T13:39:18.563 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.563 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.563 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.563 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.563 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.563 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.563 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.563 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.564 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.564 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.564 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.564 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.564 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.564 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_pool.log 2026-03-10T13:39:18.564 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.565 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_pool: /' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91344/exe ']' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.567 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.568 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.568 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.568 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.568 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.568 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.568 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.568 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.568 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.574 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_watch_notify 2026-03-10T13:39:18.574 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.577 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.577 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.577 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.577 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.577 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.577 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.577 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.577 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.577 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stdout:test api_watch_notify on pid 91461 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_watch_notify 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91461 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_watch_notify on pid 91461' 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91461 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.585 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_watch_notify --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify.xml 2>&1 | tee ceph_test_rados_api_watch_notify.log | sed "s/^/ api_watch_notify: /"' 2026-03-10T13:39:18.587 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.587 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-10T13:39:18.587 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.587 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.587 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.587 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.589 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.589 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.589 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.589 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.589 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.589 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.589 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.590 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91344/exe 2026-03-10T13:39:18.591 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.591 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.591 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.591 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.591 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.592 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.593 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.593 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.593 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.593 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.593 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.593 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.593 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.593 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.593 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.593 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.593 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.593 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.593 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.593 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_watch_notify_pp 2026-03-10T13:39:18.594 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_watch_notify_pp' 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.596 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91397/exe ']' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.597 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.598 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.598 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.598 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.598 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.598 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.598 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.598 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.598 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.598 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.598 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_snapshots_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots_pp.xml 2026-03-10T13:39:18.598 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_snapshots_pp.log 2026-03-10T13:39:18.598 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91397/exe 2026-03-10T13:39:18.598 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_snapshots_pp: /' 2026-03-10T13:39:18.599 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.600 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.600 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.600 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.600 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.600 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.600 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.600 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.600 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.600 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.600 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.600 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.600 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.601 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.601 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.601 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.601 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.601 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.601 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.601 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.601 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.601 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.602 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_stat_pp: /' 2026-03-10T13:39:18.602 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_stat_pp.log 2026-03-10T13:39:18.603 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_stat_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat_pp.xml 2026-03-10T13:39:18.608 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.608 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_watch_notify_pp 2026-03-10T13:39:18.609 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.613 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.613 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.613 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.613 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.613 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.613 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.613 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.613 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.613 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.613 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.613 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.613 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.613 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.613 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.613 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.614 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.614 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.614 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.614 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-10T13:39:18.614 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.618 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.618 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.618 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.618 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.618 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.619 INFO:tasks.workunit.client.0.vm05.stdout:test api_watch_notify_pp on pid 91504 2026-03-10T13:39:18.619 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_watch_notify_pp 2026-03-10T13:39:18.619 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91504 2026-03-10T13:39:18.619 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_watch_notify_pp on pid 91504' 2026-03-10T13:39:18.619 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91504 2026-03-10T13:39:18.619 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.619 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.619 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.620 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.620 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.620 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.620 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.620 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.621 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.623 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.623 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_cmd 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.624 
INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:18.624 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91461/exe ']' 2026-03-10T13:39:18.625 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_cmd' 2026-03-10T13:39:18.625 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_watch_notify_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify_pp.xml 2>&1 | tee ceph_test_rados_api_watch_notify_pp.log | sed "s/^/ api_watch_notify_pp: /"' 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.626 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91369/exe ']' 2026-03-10T13:39:18.632 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.633 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.633 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.633 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91461/exe 2026-03-10T13:39:18.633 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.633 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.633 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.635 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.635 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.635 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.635 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.635 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.635 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.635 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.635 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.635 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.635 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.635 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.636 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.636 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.636 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_watch_notify: /' 2026-03-10T13:39:18.638 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_watch_notify.log 2026-03-10T13:39:18.638 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_watch_notify --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify.xml 2026-03-10T13:39:18.639 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91369/exe 2026-03-10T13:39:18.639 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.640 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.640 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.640 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.640 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.640 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.640 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.640 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.640 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.640 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_cmd 2026-03-10T13:39:18.641 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.643 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.643 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.643 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.643 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.643 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.643 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.643 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.643 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.643 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.643 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.643 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.643 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.644 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_cmd 2026-03-10T13:39:18.644 INFO:tasks.workunit.client.0.vm05.stdout:test api_cmd on pid 91547 2026-03-10T13:39:18.644 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91547 2026-03-10T13:39:18.644 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_cmd on pid 91547' 2026-03-10T13:39:18.644 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91547 2026-03-10T13:39:18.644 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.644 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.647 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_stat: /' 2026-03-10T13:39:18.648 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_stat.log 2026-03-10T13:39:18.648 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_cmd_pp 2026-03-10T13:39:18.648 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_cmd_pp' 2026-03-10T13:39:18.648 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_stat --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat.xml 2026-03-10T13:39:18.649 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_cmd --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd.xml 2>&1 | tee ceph_test_rados_api_cmd.log | sed "s/^/ api_cmd: /"' 2026-03-10T13:39:18.649 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.649 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-10T13:39:18.650 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.650 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.650 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.650 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.651 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.655 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.655 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.655 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.655 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.655 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.655 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.655 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.655 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.658 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.658 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.658 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.658 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.658 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.660 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.660 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_cmd_pp 2026-03-10T13:39:18.664 INFO:tasks.workunit.client.0.vm05.stdout:test api_cmd_pp on pid 91580 2026-03-10T13:39:18.664 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_cmd_pp 2026-03-10T13:39:18.664 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91580 2026-03-10T13:39:18.664 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_cmd_pp on pid 91580' 2026-03-10T13:39:18.664 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91580 2026-03-10T13:39:18.664 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.665 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.669 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.670 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_cmd_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd_pp.xml 2>&1 | tee ceph_test_rados_api_cmd_pp.log | sed "s/^/ api_cmd_pp: /"' 2026-03-10T13:39:18.670 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_service 2026-03-10T13:39:18.670 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_service' 2026-03-10T13:39:18.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:39:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:39:18.676 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.676 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.676 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.676 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.676 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.676 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.678 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.682 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.683 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.683 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.683 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.683 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.683 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.683 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.683 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.683 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.684 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.686 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_service 2026-03-10T13:39:18.688 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.688 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.688 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.688 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.688 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.688 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.688 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.689 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.689 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.692 INFO:tasks.workunit.client.0.vm05.stdout:test api_service on pid 91631 2026-03-10T13:39:18.692 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_service 2026-03-10T13:39:18.692 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91631 2026-03-10T13:39:18.692 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_service on pid 91631' 2026-03-10T13:39:18.692 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91631 2026-03-10T13:39:18.692 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.692 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.693 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.693 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.693 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.694 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.695 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.696 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.697 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91504/exe ']' 2026-03-10T13:39:18.697 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_service --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service.xml 2>&1 | tee ceph_test_rados_api_service.log | sed "s/^/ api_service: /"' 2026-03-10T13:39:18.697 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.697 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.698 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.698 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.698 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.698 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.698 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.698 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.698 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.698 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_service_pp 2026-03-10T13:39:18.699 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_service_pp' 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.702 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91504/exe 2026-03-10T13:39:18.703 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.704 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.705 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 
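The interleaved xtrace above shows the rados API workunit fanning the test binaries out in parallel: for each name in the `for f in api_aio ... delete_pools_parallel` list it checks a parallel-mode flag (the recurring `'[' 1 -eq 1 ']'`), builds a padded label with `printf %25s`, launches `bash -o pipefail -exc "ceph_test_rados_$f ... | tee ... | sed ..."` in the background, and records the child pid in a `pids` array. A minimal sketch of that launch pattern, reconstructed from the trace -- the variable names (`f`, `r`, `ff`, `pid`, `pids`), the gtest XML paths, and the pipeline shape come from the log, while the scaffolding around them is an assumption, not the verbatim workunit script:

  #!/usr/bin/env bash
  set -ex
  xml_dir=/home/ubuntu/cephtest/archive/unit_test_xml_report
  declare -A pids
  for f in api_cmd api_cmd_pp api_service api_service_pp; do  # abridged list
      r=$(printf '%25s' "$f")             # right-aligned label for prefixing output
      ff=$(echo "$f" | awk '{print $1}')  # trimmed name, used as the pids key
      # pipefail preserves the test binary's exit status even though tee and
      # sed succeed; sed tags every line so the parallel streams stay readable.
      bash -o pipefail -exc "ceph_test_rados_$f \
          --gtest_output=xml:$xml_dir/$f.xml 2>&1 \
          | tee ceph_test_rados_$f.log \
          | sed \"s/^/$r: /\"" &
      pid=$!
      echo "test $ff on pid $pid"
      pids[$ff]=$pid
  done

This is why each test's lines arrive in this combined log already prefixed (` api_cmd: `, ` api_watch_notify_pp: `, ...), and why the profile.d sourcing trace repeats once per background shell.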
2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_service_pp 2026-03-10T13:39:18.706 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.707 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.707 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.707 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.707 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.707 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' 
-t 0 ']' 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.708 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.709 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.710 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91580/exe ']' 2026-03-10T13:39:18.710 INFO:tasks.workunit.client.0.vm05.stdout:test api_service_pp on pid 91668 2026-03-10T13:39:18.710 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_service_pp 2026-03-10T13:39:18.710 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91668 2026-03-10T13:39:18.710 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_service_pp on pid 91668' 2026-03-10T13:39:18.710 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91668 2026-03-10T13:39:18.710 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.710 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_watch_notify_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify_pp.xml 2026-03-10T13:39:18.710 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.711 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91580/exe 2026-03-10T13:39:18.711 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_watch_notify_pp.log 2026-03-10T13:39:18.711 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_watch_notify_pp: /' 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91547/exe ']' 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.712 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.713 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.713 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.713 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_cmd_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd_pp.xml 2026-03-10T13:39:18.715 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_cmd_pp: /' 2026-03-10T13:39:18.715 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_cmd_pp.log 2026-03-10T13:39:18.716 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_service_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service_pp.xml 2>&1 | tee ceph_test_rados_api_service_pp.log | sed "s/^/ api_service_pp: /"' 2026-03-10T13:39:18.716 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_c_write_operations 2026-03-10T13:39:18.716 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_c_write_operations' 2026-03-10T13:39:18.717 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.717 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.717 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.717 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.717 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.717 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.719 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91547/exe 2026-03-10T13:39:18.721 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.721 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.721 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.721 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.721 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.721 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.721 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.721 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.721 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.721 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.722 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.722 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.722 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.722 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.722 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.722 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.722 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.722 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.722 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.722 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.722 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.722 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.723 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.724 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.724 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.724 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.724 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.724 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.724 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.724 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.724 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.725 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_cmd: /' 2026-03-10T13:39:18.726 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_c_write_operations 2026-03-10T13:39:18.728 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_cmd.log 2026-03-10T13:39:18.728 INFO:tasks.workunit.client.0.vm05.stdout:test api_c_write_operations on pid 91686 2026-03-10T13:39:18.728 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_c_write_operations 2026-03-10T13:39:18.728 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91686 2026-03-10T13:39:18.728 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_c_write_operations on pid 91686' 2026-03-10T13:39:18.728 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91686 2026-03-10T13:39:18.728 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.728 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.729 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_cmd --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd.xml 2026-03-10T13:39:18.731 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s api_c_read_operations 2026-03-10T13:39:18.732 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' api_c_read_operations' 2026-03-10T13:39:18.732 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_c_write_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_write_operations.xml 2>&1 | tee ceph_test_rados_api_c_write_operations.log | sed "s/^/ api_c_write_operations: /"' 2026-03-10T13:39:18.733 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.733 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.733 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.733 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.733 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.733 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.734 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.734 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.734 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.735 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.735 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.735 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.735 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
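[annotation] Interleaved with the profile noise, the xtrace shows the actual test fan-out: for each entry in the "for f in api_aio ... delete_pools_parallel" list, a child bash runs the matching ceph_test_rados_* binary with a gtest XML report, tees a per-test log, and prefixes every output line so the parallel streams stay attributable. A minimal sketch of that launch step as it appears in the trace (an assumed reconstruction; the real qa/workunits/rados/test.sh wrapper may differ in details, and the xml_dir path is taken from the log):

    declare -A pids
    xml_dir=/home/ubuntu/cephtest/archive/unit_test_xml_report
    for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock \
             api_lock_pp api_misc api_misc_pp api_tier_pp api_pool \
             api_snapshots api_snapshots_pp api_stat api_stat_pp \
             api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp \
             api_service api_service_pp api_c_write_operations \
             api_c_read_operations list_parallel open_pools_parallel \
             delete_pools_parallel; do
        r=$(printf '%25s' "$f")     # right-aligned prefix for the sed tag
        # Child shell: fail on any pipe stage, keep raw output, tag each line.
        bash -o pipefail -exc "ceph_test_rados_$f \
            --gtest_output=xml:$xml_dir/$f.xml 2>&1 \
            | tee ceph_test_rados_$f.log | sed \"s/^/$r: /\"" &
        pid=$!
        ff=$(echo "$f" | awk '{print $1}')
        echo "test $ff on pid $pid"
        pids[$ff]=$pid              # remembered so the script can reap it later
    done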
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.735 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.735 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.736 INFO:tasks.workunit.client.0.vm05.stderr:++ echo api_c_read_operations 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-10T13:39:18.739 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.740 INFO:tasks.workunit.client.0.vm05.stdout:test api_c_read_operations on pid 91711 2026-03-10T13:39:18.740 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=api_c_read_operations 2026-03-10T13:39:18.740 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91711 2026-03-10T13:39:18.740 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test api_c_read_operations on pid 91711' 2026-03-10T13:39:18.740 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91711 2026-03-10T13:39:18.740 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.740 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.745 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.746 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.748 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_c_read_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_read_operations.xml 2>&1 | tee ceph_test_rados_api_c_read_operations.log | sed "s/^/ api_c_read_operations: /"' 2026-03-10T13:39:18.748 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s list_parallel 2026-03-10T13:39:18.749 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' list_parallel' 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.750 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.753 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.753 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.753 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.753 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.753 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.757 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.757 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-10T13:39:18.757 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.757 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.757 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.757 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.757 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.757 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.757 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.757 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.757 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.757 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.757 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.757 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.757 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.758 INFO:tasks.workunit.client.0.vm05.stderr:++ echo list_parallel 2026-03-10T13:39:18.758 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.759 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.760 INFO:tasks.workunit.client.0.vm05.stdout:test list_parallel on pid 91736 2026-03-10T13:39:18.760 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=list_parallel 2026-03-10T13:39:18.760 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91736 2026-03-10T13:39:18.760 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test list_parallel on pid 91736' 2026-03-10T13:39:18.760 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91736 2026-03-10T13:39:18.760 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.760 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.762 
INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91631/exe ']' 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.762 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.765 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.765 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.765 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.765 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.765 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.765 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.765 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.765 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.765 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.766 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_list_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/list_parallel.xml 2>&1 | tee ceph_test_rados_list_parallel.log | sed "s/^/ list_parallel: /"' 2026-03-10T13:39:18.767 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s open_pools_parallel 2026-03-10T13:39:18.767 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' open_pools_parallel' 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.769 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.770 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.770 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-10T13:39:18.770 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.770 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.770 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.770 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.771 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91631/exe 2026-03-10T13:39:18.771 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.772 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.772 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.772 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.772 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.772 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.772 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.772 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.772 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.772 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.772 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.772 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.772 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.773 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.774 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.775 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.775 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.775 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.775 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.775 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.776 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
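[annotation] The recurring "[[ $PATH =~ ... ]]" check followed by "export PATH" is the stock ~/.bash_profile guard that prepends ~/.local/bin and ~/bin only once, no matter how many nested shells start. A sketch of that standard dotfile logic (assumed rather than read from the host; in the trace the regex already matches, so no prepend happens):

    if ! [[ "$PATH" =~ "$HOME/.local/bin:$HOME/bin:" ]]; then
        PATH="$HOME/.local/bin:$HOME/bin:$PATH"
    fi
    export PATH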
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.776 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.776 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_service --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service.xml 2026-03-10T13:39:18.776 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_service.log 2026-03-10T13:39:18.780 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_service: /' 2026-03-10T13:39:18.783 INFO:tasks.workunit.client.0.vm05.stderr:++ echo open_pools_parallel 2026-03-10T13:39:18.783 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.785 INFO:tasks.workunit.client.0.vm05.stdout:test open_pools_parallel on pid 91776 2026-03-10T13:39:18.785 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=open_pools_parallel 2026-03-10T13:39:18.785 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91776 2026-03-10T13:39:18.785 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test open_pools_parallel on pid 91776' 2026-03-10T13:39:18.785 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91776 2026-03-10T13:39:18.785 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T13:39:18.785 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.788 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.788 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.788 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.788 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.788 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.788 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.789 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.789 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.789 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.789 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.789 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.789 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.789 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.789 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.789 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.789 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.789 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.789 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.789 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-10T13:39:18.789 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.790 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.794 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.794 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.794 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.794 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.794 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.795 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_open_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/open_pools_parallel.xml 2>&1 | tee ceph_test_rados_open_pools_parallel.log | sed "s/^/ open_pools_parallel: /"' 2026-03-10T13:39:18.796 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s delete_pools_parallel 2026-03-10T13:39:18.796 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' delete_pools_parallel' 2026-03-10T13:39:18.796 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.796 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.796 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.796 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.796 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.798 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.798 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.798 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.798 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.798 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.798 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.798 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.798 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.798 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.798 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.798 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.798 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.799 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.799 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.799 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.799 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.799 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.799 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.799 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.799 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.799 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.799 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.799 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
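[annotation] Every binary here is started with --gtest_output=xml:..., so each test drops a JUnit-style XML report into the archive directory that teuthology collects. A quick way to scan those reports for failures after a run (an illustrative helper, not part of the workunit; it relies on gtest's standard failures="N" attribute on the root <testsuites> element):

    for x in /home/ubuntu/cephtest/archive/unit_test_xml_report/*.xml; do
        # gtest writes the aggregate failure count on the <testsuites> root tag.
        fails=$(grep -m1 -o 'failures="[0-9]*"' "$x" | grep -o '[0-9]\+')
        if [ "${fails:-0}" -gt 0 ]; then
            echo "$x: $fails failing test(s)"
        fi
    done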
/etc/bashrc 2026-03-10T13:39:18.800 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.800 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.800 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.800 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:18.801 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91668/exe ']' 2026-03-10T13:39:18.802 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.803 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.803 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.803 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.803 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.803 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.803 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.804 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.804 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.805 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.805 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.805 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.805 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.805 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.805 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.805 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.805 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.805 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.806 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91686/exe ']' 2026-03-10T13:39:18.807 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91668/exe 2026-03-10T13:39:18.810 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.811 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.811 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.811 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.811 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.811 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.811 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.811 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.811 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.811 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.811 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.811 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.811 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.811 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.812 INFO:tasks.workunit.client.0.vm05.stderr:++ echo delete_pools_parallel 2026-03-10T13:39:18.814 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.814 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.814 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.814 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.814 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.814 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.814 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.814 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.814 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.816 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_service_pp.log 2026-03-10T13:39:18.816 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_service_pp: /' 2026-03-10T13:39:18.816 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_service_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service_pp.xml 2026-03-10T13:39:18.828 INFO:tasks.workunit.client.0.vm05.stdout:test delete_pools_parallel on pid 91842 2026-03-10T13:39:18.828 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=delete_pools_parallel 2026-03-10T13:39:18.828 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91842 2026-03-10T13:39:18.828 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test delete_pools_parallel on pid 91842' 2026-03-10T13:39:18.828 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91842 2026-03-10T13:39:18.828 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-10T13:39:18.828 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-10T13:39:18.830 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.838 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91686/exe 2026-03-10T13:39:18.839 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_rados_delete_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/delete_pools_parallel.xml 2>&1 | tee ceph_test_rados_delete_pools_parallel.log | sed "s/^/ delete_pools_parallel: /"' 2026-03-10T13:39:18.839 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s cls 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.841 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.842 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.842 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.842 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.842 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.842 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' 
-t 0 ']' 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.843 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' cls' 2026-03-10T13:39:18.844 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.844 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.844 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.844 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.844 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.844 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.844 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.844 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.845 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.845 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.845 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.845 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.845 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.845 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.845 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.845 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.845 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.845 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.845 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.845 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.845 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.847 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.847 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.847 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.847 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.847 
INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.848 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.848 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.848 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.848 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.848 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.848 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.848 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.848 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.848 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_c_write_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_write_operations.xml 2026-03-10T13:39:18.849 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_c_write_operations.log 2026-03-10T13:39:18.849 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_c_write_operations: /' 2026-03-10T13:39:18.851 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.852 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.852 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.852 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.852 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.852 INFO:tasks.workunit.client.0.vm05.stderr:++ echo cls 2026-03-10T13:39:18.854 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.855 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.856 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.856 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.856 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.856 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.856 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.857 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.857 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.857 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.857 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.857 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.857 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.857 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.857 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.857 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.857 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.857 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.857 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.857 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.858 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.858 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.858 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.858 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.858 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.858 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.858 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.858 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91736/exe ']' 2026-03-10T13:39:18.858 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.859 INFO:tasks.workunit.client.0.vm05.stdout:test cls on pid 91888 2026-03-10T13:39:18.859 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=cls 2026-03-10T13:39:18.859 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91888 2026-03-10T13:39:18.859 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test cls on pid 91888' 2026-03-10T13:39:18.859 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91888 2026-03-10T13:39:18.859 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-10T13:39:18.859 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.861 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.861 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.861 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.861 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.861 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.861 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.861 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.861 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.861 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.861 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.861 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.861 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.861 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.861 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.861 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91711/exe ']' 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.862 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.863 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.863 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.863 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.863 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.863 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.863 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.863 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.867 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.868 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:18.868 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.868 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91736/exe 2026-03-10T13:39:18.868 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.869 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s cmd 2026-03-10T13:39:18.869 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.869 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.869 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.869 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.869 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.870 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.870 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.870 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.870 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.870 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.870 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.870 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.870 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91711/exe 2026-03-10T13:39:18.870 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_cls 2>&1 | tee ceph_test_neorados_cls.log | sed "s/^/ cls: /"' 2026-03-10T13:39:18.870 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.871 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' cmd' 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.872 
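What this stretch of trace records: the rados_api_tests workunit is fanning out its test suites. For each name in the loop "for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations" it backgrounds a fresh bash -o pipefail -exc that runs the matching ceph_test_neorados_* binary, tees the output to a per-suite log, prefixes every line with the suite name, and records the child's pid in the pids array ("test cls on pid 91888", "test cmd on pid 91937", and so on). The bursts of /etc/profile.d/*.sh and /etc/bashrc sourcing interleaved through the trace come from those freshly spawned shells reading the system profile scripts; the /proc/<pid>/exe checks reference the same child pids. A minimal sketch of the launch pattern follows. Only the suite list, the ceph_test_neorados_* binaries, the tee/sed pipeline, and the pids[$f] bookkeeping are taken from the log; the surrounding script structure, including the final wait loop, is an assumption about how the workunit collects results.

    #!/usr/bin/env bash
    # Sketch of the parallel launch pattern visible in the xtrace above.
    # The wait/exit handling at the bottom is assumed, not shown in this log.
    declare -A pids

    for f in cls cmd handler_error io ec_io list ec_list misc pool \
             read_operations snapshots watch_notify write_operations; do
        # Each suite runs in its own shell so pipefail covers the whole
        # binary | tee | sed pipeline, as seen in the trace.
        bash -o pipefail -exc \
            "ceph_test_neorados_$f 2>&1 | tee ceph_test_neorados_$f.log | sed \"s/^/ $f: /\"" &
        pid=$!
        echo "test $f on pid $pid"
        pids[$f]=$pid
    done

    # Assumed follow-up: reap every recorded pid and fail if any suite failed.
    ret=0
    for f in "${!pids[@]}"; do
        wait "${pids[$f]}" || ret=1
    done
    exit $ret

Backgrounding one wrapper shell per suite, rather than the bare binary, is what lets the sed prefix tag each suite's interleaved output without losing the binary's exit status.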
INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.872 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.873 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.873 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.873 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.873 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_list_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/list_parallel.xml 2026-03-10T13:39:18.875 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_list_parallel.log 2026-03-10T13:39:18.875 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ list_parallel: /' 2026-03-10T13:39:18.879 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.880 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.880 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.880 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.880 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.880 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.880 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.882 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_api_c_read_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_read_operations.xml 2026-03-10T13:39:18.883 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ api_c_read_operations: /' 2026-03-10T13:39:18.883 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_api_c_read_operations.log 2026-03-10T13:39:18.888 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.889 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.889 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.889 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.889 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.889 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.889 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.889 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.891 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.891 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.891 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.891 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.891 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:18.897 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.898 INFO:tasks.workunit.client.0.vm05.stderr:++ echo cmd 2026-03-10T13:39:18.899 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91776/exe ']' 2026-03-10T13:39:18.912 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:18.915 INFO:tasks.workunit.client.0.vm05.stdout:test cmd on pid 91937 2026-03-10T13:39:18.915 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=cmd 2026-03-10T13:39:18.915 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91937 2026-03-10T13:39:18.915 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test cmd on pid 91937' 2026-03-10T13:39:18.915 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91937 2026-03-10T13:39:18.915 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-10T13:39:18.915 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.935 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s handler_error 2026-03-10T13:39:18.936 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_cmd 2>&1 | tee ceph_test_neorados_cmd.log | sed "s/^/ cmd: /"' 2026-03-10T13:39:18.936 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' handler_error' 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:18.938 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.939 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.939 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:18.939 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.939 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:18.939 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:18.939 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:18.939 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:18.940 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:18.940 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:18.940 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.940 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:18.943 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91776/exe 2026-03-10T13:39:18.948 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:18.952 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:18.952 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:18.952 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:18.952 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:18.952 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:18.952 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:18.952 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:18.952 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:18.952 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:18.952 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:18.952 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:18.952 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:18.962 INFO:tasks.workunit.client.0.vm05.stderr:++ echo handler_error 2026-03-10T13:39:18.962 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:18.967 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:18.970 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:18.971 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:18.971 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:18.971 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.971 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:18.971 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.971 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:18.971 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.974 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_open_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/open_pools_parallel.xml 2026-03-10T13:39:18.974 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ open_pools_parallel: /' 2026-03-10T13:39:18.980 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_open_pools_parallel.log 2026-03-10T13:39:18.980 INFO:tasks.workunit.client.0.vm05.stdout:test handler_error on pid 91966 2026-03-10T13:39:18.980 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=handler_error 2026-03-10T13:39:18.980 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91966 2026-03-10T13:39:18.980 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test handler_error on pid 91966' 2026-03-10T13:39:18.980 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=91966 2026-03-10T13:39:18.980 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-10T13:39:18.980 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:18.981 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91842/exe ']' 2026-03-10T13:39:18.990 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:18.990 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:18.990 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:18.990 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:18.990 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:18.990 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:18.990 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:18.990 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:18.990 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:18.997 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_handler_error 2>&1 | tee ceph_test_neorados_handler_error.log | sed "s/^/ handler_error: /"' 2026-03-10T13:39:18.999 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s io 2026-03-10T13:39:18.999 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' io' 2026-03-10T13:39:19.008 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:19.008 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:19.008 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:19.008 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:19.008 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.008 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:19.011 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:19.011 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:19.011 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:19.011 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.011 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:19.011 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.012 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:19.012 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:19.012 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:19.012 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.012 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:19.012 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.012 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorsysstat.sh 2026-03-10T13:39:19.012 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:19.012 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:19.012 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.012 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:19.012 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.012 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:19.012 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.012 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91842/exe 2026-03-10T13:39:19.013 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:19.014 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:19.014 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:19.014 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:19.014 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:19.014 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:19.014 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:19.014 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:19.014 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:19.014 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:19.014 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:19.014 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:19.014 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:19.017 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:19.017 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:19.017 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:19.018 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.018 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:19.018 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.018 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:19.018 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.021 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:19.024 INFO:tasks.workunit.client.0.vm05.stderr:++ echo io 2026-03-10T13:39:19.025 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_rados_delete_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/delete_pools_parallel.xml 2026-03-10T13:39:19.026 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_rados_delete_pools_parallel.log 2026-03-10T13:39:19.027 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ delete_pools_parallel: /' 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-10T13:39:19.036 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:19.037 INFO:tasks.workunit.client.0.vm05.stdout:test io on pid 92015 2026-03-10T13:39:19.037 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=io 2026-03-10T13:39:19.037 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92015 2026-03-10T13:39:19.037 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test io on pid 92015' 2026-03-10T13:39:19.037 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=92015 2026-03-10T13:39:19.037 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-10T13:39:19.037 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:19.042 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:19.042 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:19.042 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:19.042 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.042 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:19.042 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.042 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:19.042 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:19.042 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.045 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:19.045 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_io 2>&1 | tee ceph_test_neorados_io.log | sed "s/^/ io: /"' 2026-03-10T13:39:19.046 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:19.046 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:19.046 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:19.046 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:19.046 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.046 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:19.047 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s ec_io 2026-03-10T13:39:19.053 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' ec_io' 2026-03-10T13:39:19.060 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:19.060 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:19.060 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:19.060 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.060 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:19.060 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.061 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:19.061 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' 
-t 0 ']' 2026-03-10T13:39:19.061 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:19.061 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.061 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:19.061 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.061 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:19.061 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:19.061 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:19.061 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.061 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:19.061 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.061 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:19.061 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.066 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:19.067 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:19.067 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:19.068 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.068 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:19.068 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.068 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:19.068 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:19.079 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:19.082 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.082 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:19.082 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.082 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:19.082 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:19.084 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:19.087 INFO:tasks.workunit.client.0.vm05.stderr:++ echo ec_io 2026-03-10T13:39:19.088 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:19.095 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:19.099 INFO:tasks.workunit.client.0.vm05.stdout:test ec_io on pid 92066 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=ec_io 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92066 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test ec_io on pid 92066' 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=92066 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/less.sh 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:19.100 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91888/exe ']' 2026-03-10T13:39:19.103 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.103 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:19.103 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.103 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:19.103 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:19.105 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:19.105 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:19.105 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:19.105 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.105 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:19.105 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.105 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:19.105 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:19.105 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.106 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_ec_io 2>&1 | tee ceph_test_neorados_ec_io.log | sed "s/^/ ec_io: /"' 2026-03-10T13:39:19.108 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:19.108 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:19.108 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:19.108 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.108 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:19.108 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.108 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorsysstat.sh 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s list 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' list' 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.109 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:19.110 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:19.113 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91888/exe 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:19.114 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91937/exe ']' 2026-03-10T13:39:19.117 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:19.118 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:19.118 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:19.118 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:19.118 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:19.118 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:19.118 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:19.118 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:19.118 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:19.119 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:19.119 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:19.119 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:19.119 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:19.119 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:19.120 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:19.120 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:19.120 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.120 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:19.120 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.120 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:19.120 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.123 INFO:tasks.workunit.client.0.vm05.stderr:++ echo list 2026-03-10T13:39:19.123 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/debuginfod.sh 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:19.126 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:19.127 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:19.127 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:19.127 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:19.127 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.127 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:19.127 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.127 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:19.127 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:19.127 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.129 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_cls 2026-03-10T13:39:19.129 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_cls.log 2026-03-10T13:39:19.130 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ cls: /' 2026-03-10T13:39:19.131 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:19.131 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:19.131 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:19.131 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.131 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:19.131 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.131 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/debuginfod.sh 2026-03-10T13:39:19.131 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:19.132 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:19.132 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:19.132 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:19.132 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.132 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:19.132 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.132 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:19.132 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.132 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:19.132 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.132 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:19.132 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:19.137 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91937/exe 2026-03-10T13:39:19.139 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:19.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:19.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:19.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:19.140 INFO:tasks.workunit.client.0.vm05.stdout:test list on pid 92125 2026-03-10T13:39:19.140 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=list 2026-03-10T13:39:19.140 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92125 2026-03-10T13:39:19.140 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test list on pid 92125' 2026-03-10T13:39:19.140 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=92125 2026-03-10T13:39:19.140 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-10T13:39:19.140 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:19.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:19.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:19.140 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:19.141 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:19.141 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:19.141 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:19.141 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:19.141 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:19.141 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:19.141 
INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:19.141 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:19.141 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:19.142 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:19.145 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:19.145 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:19.145 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:19.145 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.145 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:19.145 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.145 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/91966/exe ']' 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:19.146 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/91966/exe 2026-03-10T13:39:19.147 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_list 2>&1 | tee ceph_test_neorados_list.log | sed "s/^/ list: /"' 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:19.148 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_handler_error 2026-03-10T13:39:19.149 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s ec_list 2026-03-10T13:39:19.149 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' ec_list' 2026-03-10T13:39:19.149 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_cmd 2026-03-10T13:39:19.150 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_cmd.log 2026-03-10T13:39:19.150 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ cmd: /' 2026-03-10T13:39:19.153 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:19.153 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:19.153 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:19.153 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.153 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:19.153 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.153 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:19.153 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:19.154 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.160 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.160 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:19.160 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.160 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:19.160 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:19.162 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_handler_error.log 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ handler_error: /' 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' 
-t 0 ']' 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:19.163 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:19.164 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.164 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:19.164 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.164 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:19.164 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.166 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:19.170 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:19.170 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:19.170 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:19.170 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.170 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:19.170 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.170 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:19.170 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:19.170 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.172 INFO:tasks.workunit.client.0.vm05.stderr:++ echo ec_list 2026-03-10T13:39:19.172 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:19.172 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:19.172 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:19.172 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.172 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:19.172 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.172 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/debuginfod.sh 2026-03-10T13:39:19.172 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:19.172 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:19.172 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:19.172 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:19.172 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.172 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:19.173 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.173 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:19.173 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.173 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:19.173 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.173 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:19.173 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:19.173 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:19.173 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:19.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:18 vm09 ceph-mon[53367]: pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 46 B/s, 1 objects/s recovering 2026-03-10T13:39:19.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:18 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:19.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:18 vm09 ceph-mon[53367]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:19.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:18 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:19.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:18 vm09 ceph-mon[53367]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:19.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:18 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:19.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:18 vm09 ceph-mon[53367]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:19.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:18 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2952521137' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-10T13:39:19.174 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.174 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:19.174 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.174 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:19.174 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:19.174 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/92125/exe ']' 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.175 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/92125/exe 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/92015/exe ']' 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:19.176 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stdout:test ec_list on pid 92182 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=ec_list 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/92015/exe 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92182 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test ec_list on pid 92182' 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=92182 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:19.177 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:19.178 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:19.178 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:19.178 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:19.178 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:19.178 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:19.178 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:19.178 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_ec_list 2>&1 | tee ceph_test_neorados_ec_list.log | sed "s/^/ ec_list: /"' 2026-03-10T13:39:19.178 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:19.178 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:19.178 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:19.179 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 
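The stderr trace above is the workunit's parallel driver: for each suite name it launches the matching ceph_test_neorados_* binary in a background `bash -o pipefail` pipeline, prefixes its output, logs "test <name> on pid <pid>", and records the PID in a pids array. A minimal bash sketch consistent with this trace follows; the script path, option handling, and the later wait/reporting logic are not shown in this excerpt and are assumptions.

#!/usr/bin/env bash
# Sketch of the parallel driver implied by the xtrace above (assumption: the
# real workunit script's name, flags, and cleanup logic are outside this excerpt).
set -e

declare -A pids
parallel=1   # the trace shows '[ 1 -eq 1 ]' checked before each background launch

for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations \
         snapshots watch_notify write_operations
do
    if [ $parallel -eq 1 ]; then
        r=$(printf '%25s' "$f")            # right-padded prefix keeps interleaved output readable
        ff=$(echo "$f" | awk '{print $1}')
        # Each suite runs under its own 'bash -o pipefail -exc' so a failing test binary
        # fails the whole pipeline even though output passes through tee and sed.
        bash -o pipefail -exc "ceph_test_neorados_$f 2>&1 | tee ceph_test_neorados_$f.log | sed \"s/^/$r: /\"" &
        pid=$!
        echo "test $ff on pid $pid"
        pids[$f]=$pid
    else
        ceph_test_neorados_$f
    fi
done

# Assumption: the driver later waits on ${pids[@]} and reports per-suite results;
# that part does not appear in this log chunk.

Interleaved with the trace, the ceph-mon journal on vm09 records the LibRadosWatchNotifyECPP fixture tearing down and recreating its erasure-code profile testprofile-LibRadosWatchNotifyECPP_vm05-91659-1 (k=2, m=1, crush-failure-domain=osd); those mon commands correspond to the CLI forms `ceph osd erasure-code-profile set <name> k=2 m=1 crush-failure-domain=osd`, `ceph osd crush rule rm <rule>`, and `ceph osd erasure-code-profile rm <name>`.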
2026-03-10T13:39:19.179 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:19.179 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:19.180 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:19.180 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:19.180 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:19.180 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:19.180 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.180 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:19.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:19.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:19.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:19.184 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.184 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:19.184 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.184 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:19.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:19.184 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.184 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_io 2026-03-10T13:39:19.187 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s misc 2026-03-10T13:39:19.188 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' misc' 2026-03-10T13:39:19.188 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_io.log 2026-03-10T13:39:19.188 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ io: /' 2026-03-10T13:39:19.197 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:19.197 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:19.197 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:19.197 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.198 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:19.198 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.198 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:19.198 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.202 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_list.log 2026-03-10T13:39:19.204 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ list: /' 2026-03-10T13:39:19.206 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_list 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:19.207 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:19.210 INFO:tasks.workunit.client.0.vm05.stderr:++ echo misc 2026-03-10T13:39:19.210 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' 
-t 0 ']' 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:19.217 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.218 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:19.220 INFO:tasks.workunit.client.0.vm05.stdout:test misc on pid 92225 2026-03-10T13:39:19.220 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=misc 2026-03-10T13:39:19.220 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92225 2026-03-10T13:39:19.220 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test misc on pid 92225' 2026-03-10T13:39:19.220 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=92225 2026-03-10T13:39:19.220 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-10T13:39:19.220 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:19.224 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s pool 2026-03-10T13:39:19.225 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_misc 2>&1 | tee ceph_test_neorados_misc.log | sed "s/^/ misc: /"' 2026-03-10T13:39:19.225 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' pool' 2026-03-10T13:39:19.226 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.226 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:19.226 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.226 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:19.226 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:19.226 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:19.226 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:19.226 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:19.226 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.226 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:19.226 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.226 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:19.226 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:19.226 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.227 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:19.228 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:19.228 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:19.228 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:19.228 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.228 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:19.230 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:19.231 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:19.231 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:19.231 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:19.231 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.231 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:19.231 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.231 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:19.231 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:19.231 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.231 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.231 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:19.231 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.231 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:19.231 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/92066/exe ']' 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:++ echo pool 2026-03-10T13:39:19.232 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:19.237 INFO:tasks.workunit.client.0.vm05.stdout:test pool on pid 92258 2026-03-10T13:39:19.237 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=pool 2026-03-10T13:39:19.237 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92258 2026-03-10T13:39:19.237 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test pool on pid 92258' 2026-03-10T13:39:19.237 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=92258 2026-03-10T13:39:19.237 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-10T13:39:19.237 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-10T13:39:19.239 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:19.242 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_pool 2>&1 | tee ceph_test_neorados_pool.log | sed "s/^/ pool: /"' 2026-03-10T13:39:19.242 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s read_operations 2026-03-10T13:39:19.243 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' read_operations' 2026-03-10T13:39:19.244 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/92066/exe 2026-03-10T13:39:19.245 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:19.246 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:19.248 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_ec_io 2026-03-10T13:39:19.249 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:19.249 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-10T13:39:19.249 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:19.250 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:19.250 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.250 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:19.253 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.256 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ ec_io: /' 2026-03-10T13:39:19.256 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_ec_io.log 2026-03-10T13:39:19.256 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:19.257 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:19.257 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:19.257 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.258 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:19.258 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.258 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:19.258 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.259 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:19.262 INFO:tasks.workunit.client.0.vm05.stderr:++ echo read_operations 2026-03-10T13:39:19.263 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:19.264 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/92182/exe ']' 2026-03-10T13:39:19.266 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:19.272 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:19.272 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:19.272 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:19.272 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=read_operations 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stdout:test read_operations on pid 92310 2026-03-10T13:39:19.273 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92310 2026-03-10T13:39:19.274 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test read_operations on pid 92310' 2026-03-10T13:39:19.274 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=92310 2026-03-10T13:39:19.274 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-10T13:39:19.274 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:19.276 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/92182/exe 2026-03-10T13:39:19.279 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s snapshots 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' snapshots' 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ ec_list: /' 2026-03-10T13:39:19.280 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_ec_list.log 2026-03-10T13:39:19.281 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_ec_list 2026-03-10T13:39:19.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:19.281 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_read_operations 2>&1 | tee ceph_test_neorados_read_operations.log | sed "s/^/ read_operations: /"' 2026-03-10T13:39:19.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:19.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:19.281 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.281 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:19.281 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.281 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/debuginfod.sh 2026-03-10T13:39:19.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:19.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:19.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:19.281 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:19.281 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.281 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:19.282 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.282 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:19.282 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.282 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:19.282 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.282 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:19.282 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:19.287 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:19.287 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:19.287 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:19.287 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.287 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:19.287 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.287 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:19.287 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:19.287 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.288 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:19.288 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:19.288 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:19.289 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:19.289 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:19.289 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.289 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:19.291 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:19.291 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:19.291 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:19.291 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.291 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:19.291 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.291 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:19.291 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.291 INFO:tasks.workunit.client.0.vm05.stderr:++ echo snapshots 2026-03-10T13:39:19.291 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:19.295 INFO:tasks.workunit.client.0.vm05.stdout:test snapshots on pid 92341 2026-03-10T13:39:19.295 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=snapshots 2026-03-10T13:39:19.296 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92341 2026-03-10T13:39:19.296 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test snapshots on pid 92341' 2026-03-10T13:39:19.296 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=92341 2026-03-10T13:39:19.296 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-10T13:39:19.296 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-10T13:39:19.297 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:19.298 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s watch_notify 2026-03-10T13:39:19.299 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' watch_notify' 2026-03-10T13:39:19.299 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.301 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_snapshots 2>&1 | tee ceph_test_neorados_snapshots.log | sed "s/^/ snapshots: /"' 2026-03-10T13:39:19.302 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:19.302 INFO:tasks.workunit.client.0.vm05.stderr:+ . 
/etc/bashrc 2026-03-10T13:39:19.302 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:19.302 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:19.302 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.302 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:19.302 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:19.303 INFO:tasks.workunit.client.0.vm05.stderr:++ echo watch_notify 2026-03-10T13:39:19.304 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:19.304 INFO:tasks.workunit.client.0.vm05.stdout:test watch_notify on pid 92359 2026-03-10T13:39:19.304 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=watch_notify 2026-03-10T13:39:19.304 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92359 2026-03-10T13:39:19.305 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test watch_notify on pid 92359' 2026-03-10T13:39:19.305 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=92359 2026-03-10T13:39:19.305 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations 2026-03-10T13:39:19.305 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:19.305 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:19.305 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:19.305 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:19.305 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.305 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:19.305 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.305 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:19.305 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:19.305 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:19.306 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.306 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:19.306 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.306 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:19.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:19.306 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.306 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:19.306 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.306 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:19.306 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/92225/exe ']' 2026-03-10T13:39:19.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:19.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:19.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:19.307 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.307 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:19.307 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.307 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorzgrep.sh 2026-03-10T13:39:19.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:19.307 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.307 INFO:tasks.workunit.client.0.vm05.stderr:++ printf %25s write_operations 2026-03-10T13:39:19.307 INFO:tasks.workunit.client.0.vm05.stderr:+ r=' write_operations' 2026-03-10T13:39:19.307 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_watch_notify 2>&1 | tee ceph_test_neorados_watch_notify.log | sed "s/^/ watch_notify: /"' 2026-03-10T13:39:19.308 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:19.308 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:19.309 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:19.309 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:19.309 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.309 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:19.309 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:19.310 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:19.310 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:19.310 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.310 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:19.310 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.310 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorgrep.sh 2026-03-10T13:39:19.310 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.311 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:19.311 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:19.311 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:19.312 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.312 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:19.312 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.312 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:19.312 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.312 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:19.312 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.312 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:19.312 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:19.312 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.312 INFO:tasks.workunit.client.0.vm05.stderr:++ awk '{print $1}' 2026-03-10T13:39:19.314 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/92225/exe 2026-03-10T13:39:19.314 INFO:tasks.workunit.client.0.vm05.stderr:++ echo write_operations 2026-03-10T13:39:19.315 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:19.315 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:19.315 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:19.315 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:19.316 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:19.317 INFO:tasks.workunit.client.0.vm05.stdout:test write_operations on pid 92384 2026-03-10T13:39:19.317 INFO:tasks.workunit.client.0.vm05.stderr:+ ff=write_operations 2026-03-10T13:39:19.317 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92384 2026-03-10T13:39:19.317 INFO:tasks.workunit.client.0.vm05.stderr:+ echo 'test write_operations on pid 92384' 2026-03-10T13:39:19.317 INFO:tasks.workunit.client.0.vm05.stderr:+ pids[$f]=92384 2026-03-10T13:39:19.317 INFO:tasks.workunit.client.0.vm05.stderr:+ ret=0 2026-03-10T13:39:19.317 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T13:39:19.317 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91197 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91197 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:19.318 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:19.319 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:19.319 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.319 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:19.319 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.319 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:19.319 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.319 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_misc.log 2026-03-10T13:39:19.320 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:19.320 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:19.320 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:19.320 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.320 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:19.320 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.320 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:19.320 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:19.320 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.320 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.320 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:19.320 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.320 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:19.320 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.320 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/92258/exe ']' 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.321 INFO:tasks.workunit.client.0.vm05.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_write_operations 2>&1 | tee ceph_test_neorados_write_operations.log | sed "s/^/ write_operations: /"' 2026-03-10T13:39:19.322 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ misc: /' 2026-03-10T13:39:19.322 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/92258/exe 2026-03-10T13:39:19.322 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_misc 2026-03-10T13:39:19.323 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:19.323 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -f /etc/bashrc ']' 2026-03-10T13:39:19.324 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:19.324 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:19.324 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:19.324 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:19.324 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:19.324 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:19.324 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:19.324 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:19.324 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:19.324 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:19.324 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:19.324 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:19.325 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_pool 2026-03-10T13:39:19.327 INFO:tasks.workunit.client.0.vm05.stderr:+ . /etc/bashrc 2026-03-10T13:39:19.327 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -z '' ']' 2026-03-10T13:39:19.327 INFO:tasks.workunit.client.0.vm05.stderr:++ BASHRCSOURCED=Y 2026-03-10T13:39:19.327 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.327 INFO:tasks.workunit.client.0.vm05.stderr:++ shopt -q login_shell 2026-03-10T13:39:19.329 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.329 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:19.329 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.329 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:19.329 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:19.330 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_pool.log 2026-03-10T13:39:19.330 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ pool: /' 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[51512]: pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 46 B/s, 1 objects/s recovering 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[51512]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[51512]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[51512]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2952521137' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[58955]: pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s; 46 B/s, 1 objects/s recovering 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[58955]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[58955]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[58955]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:18 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2952521137' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorsysstat.sh 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:19.333 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.334 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:19.335 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:19.335 INFO:tasks.workunit.client.0.vm05.stderr:+++ umask 2026-03-10T13:39:19.336 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' 0022 -eq 0 ']' 2026-03-10T13:39:19.336 INFO:tasks.workunit.client.0.vm05.stderr:++ SHELL=/bin/bash 2026-03-10T13:39:19.336 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.336 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorgrep.sh ']' 2026-03-10T13:39:19.336 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.336 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorgrep.sh 2026-03-10T13:39:19.336 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.342 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:19.343 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.343 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:19.343 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/92310/exe ']' 2026-03-10T13:39:19.343 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.343 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:19.343 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.343 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:19.343 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:19.347 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:19.347 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:19.347 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:19.347 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:19.347 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.347 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:19.347 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.347 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:19.347 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:19.348 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/92341/exe ']' 2026-03-10T13:39:19.349 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/92310/exe 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'grep=grep --color=auto' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'egrep=egrep --color=auto' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'fgrep=fgrep --color=auto' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorls.sh ']' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorls.sh 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' '!' -t 0 ']' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ return 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorsysstat.sh ']' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorsysstat.sh 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ export S_COLORS=auto 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ S_COLORS=auto 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorxzgrep.sh ']' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/colorxzgrep.sh 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/gawk.sh 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/lang.sh 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:19.352 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/92341/exe 2026-03-10T13:39:19.353 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:19.353 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:19.353 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:19.353 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:19.353 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:19.353 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:19.353 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:19.353 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:19.353 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:19.353 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:19.353 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:19.353 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:19.353 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:19.353 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:19.354 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:19.354 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:19.354 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:19.354 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:19.354 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:19.354 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:19.354 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:19.354 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:19.354 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:19.354 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:19.354 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:19.354 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:19.354 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:19.354 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ snapshots: /' 2026-03-10T13:39:19.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:19.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:19.356 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:19.356 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 
's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_read_operations 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/which2.sh 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/92359/exe ']' 2026-03-10T13:39:19.357 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ read_operations: /' 2026-03-10T13:39:19.358 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_read_operations.log 2026-03-10T13:39:19.362 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzgrep=xzgrep --color=auto' 2026-03-10T13:39:19.362 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzegrep=xzegrep --color=auto' 2026-03-10T13:39:19.362 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'xzfgrep=xzfgrep --color=auto' 2026-03-10T13:39:19.362 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.362 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/colorzgrep.sh ']' 2026-03-10T13:39:19.362 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.362 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/colorzgrep.sh 2026-03-10T13:39:19.362 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /usr/libexec/grepconf.sh ']' 2026-03-10T13:39:19.362 INFO:tasks.workunit.client.0.vm05.stderr:+++ /usr/libexec/grepconf.sh -c 2026-03-10T13:39:19.363 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_snapshots 2026-03-10T13:39:19.363 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_snapshots.log 2026-03-10T13:39:19.367 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/92359/exe 2026-03-10T13:39:19.369 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:19.370 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:19.370 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:19.370 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:19.370 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:19.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:19.371 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:19.371 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:19.371 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:19.371 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:19.371 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:19.371 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:19.372 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:19.375 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zgrep=zgrep --color=auto' 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zfgrep=zfgrep --color=auto' 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:+++ alias 'zegrep=zegrep --color=auto' 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/debuginfod.sh ']' 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/debuginfod.sh 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:+++ prefix=/usr 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z 'https://debuginfod.centos.org/ ' ']' 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z /etc/keys/ima: ']' 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset prefix 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/gawk.sh ']' 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/gawk.sh 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/lang.sh ']' 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/lang.sh 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ watch_notify: /' 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_watch_notify 2026-03-10T13:39:19.376 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_watch_notify.log 2026-03-10T13:39:19.384 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/locale 2026-03-10T13:39:19.388 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.388 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG_backup=en_US.UTF-8 2026-03-10T13:39:19.388 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.388 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /etc/locale.conf ']' 2026-03-10T13:39:19.388 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -x /usr/bin/sed ']' 2026-03-10T13:39:19.388 INFO:tasks.workunit.client.0.vm05.stderr:++++ /usr/bin/sed -r -e 's/^[[:blank:]]*([[:upper:]_]+)=([[:print:][:digit:]\._-]+|"[[:print:][:digit:]\._-]+")/export \1=\2/;t;d' /etc/locale.conf 2026-03-10T13:39:19.389 INFO:tasks.workunit.client.0.vm05.stderr:+++ eval export LANG=en_US.UTF-8 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:++++ export LANG=en_US.UTF-8 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:++++ LANG=en_US.UTF-8 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:+++ for config in /etc/locale.conf "${HOME}/.i18n" 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -f /home/ubuntu/.i18n ']' 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:+++ LANG=en_US.UTF-8 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:+++ unset LANG_backup config 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n '' ']' 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -n en_US.UTF-8 ']' 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' dumb = linux ']' 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/less.sh ']' 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:++ . /etc/profile.d/less.sh 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -z '||/usr/bin/lesspipe.sh %s' ']' 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:++ for i in /etc/profile.d/*.sh 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' -r /etc/profile.d/which2.sh ']' 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:++ '[' '' ']' 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:++ . 
/etc/profile.d/which2.sh 2026-03-10T13:39:19.390 INFO:tasks.workunit.client.0.vm05.stderr:+++ '[' -r /proc/92384/exe ']' 2026-03-10T13:39:19.400 INFO:tasks.workunit.client.0.vm05.stderr:+++++ readlink /proc/92384/exe 2026-03-10T13:39:19.401 INFO:tasks.workunit.client.0.vm05.stderr:++++ basename /usr/bin/bash 2026-03-10T13:39:19.402 INFO:tasks.workunit.client.0.vm05.stderr:+++ SHELLNAME=bash 2026-03-10T13:39:19.402 INFO:tasks.workunit.client.0.vm05.stderr:+++ case "$SHELLNAME" in 2026-03-10T13:39:19.402 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_declare='declare -f' 2026-03-10T13:39:19.402 INFO:tasks.workunit.client.0.vm05.stderr:+++ which_opt=-f 2026-03-10T13:39:19.402 INFO:tasks.workunit.client.0.vm05.stderr:+++ export which_declare 2026-03-10T13:39:19.402 INFO:tasks.workunit.client.0.vm05.stderr:+++ export -f which 2026-03-10T13:39:19.402 INFO:tasks.workunit.client.0.vm05.stderr:++ unset i 2026-03-10T13:39:19.402 INFO:tasks.workunit.client.0.vm05.stderr:++ unset -f pathmunge 2026-03-10T13:39:19.402 INFO:tasks.workunit.client.0.vm05.stderr:+ [[ /home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin =~ /home/ubuntu/\.local/bin:/home/ubuntu/bin: ]] 2026-03-10T13:39:19.402 INFO:tasks.workunit.client.0.vm05.stderr:+ export PATH 2026-03-10T13:39:19.402 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' -d /home/ubuntu/.bashrc.d ']' 2026-03-10T13:39:19.402 INFO:tasks.workunit.client.0.vm05.stderr:+ unset rc 2026-03-10T13:39:19.405 INFO:tasks.workunit.client.0.vm05.stderr:+ tee ceph_test_neorados_write_operations.log 2026-03-10T13:39:19.405 INFO:tasks.workunit.client.0.vm05.stderr:+ sed 's/^/ write_operations: /' 2026-03-10T13:39:19.405 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph_test_neorados_write_operations 2026-03-10T13:39:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24752 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: osdmap e62: 8 total, 8 up, 8 in 2026-03-10T13:39:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1422259403' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm05-91018-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1147032706' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm05-91043-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/763033691' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-91213-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/809522417' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm05-91182-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2693885606' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1616203079' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm05-91544-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1155449836' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91673-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3648177896' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-91333-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3859578219' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-91536-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1391050673' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-91476-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-91659-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2864183472' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-91051-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-91156-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/4068638736' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-91492-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3991768196' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-91079-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2161082623' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-91276-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24722 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-91476-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24700 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-91333-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-91659-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24653 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-91051-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24745 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-91536-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24674 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-91156-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24710 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-91492-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24647 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-91079-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: 
from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-92281-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-92320-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.24752 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: osdmap e62: 8 total, 8 up, 8 in 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1422259403' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm05-91018-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1147032706' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm05-91043-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/763033691' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-91213-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/809522417' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm05-91182-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2693885606' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1616203079' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm05-91544-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1155449836' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91673-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3648177896' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-91333-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3859578219' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-91536-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1391050673' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-91476-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-91659-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2864183472' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-91051-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-91156-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4068638736' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-91492-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3991768196' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-91079-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2161082623' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-91276-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.24722 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-91476-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.24700 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-91333-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-91659-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.24653 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-91051-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.24745 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-91536-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.24674 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-91156-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.24710 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-91492-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.24647 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-91079-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-92281-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:20.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:19 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2233289382' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-92320-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:20.333 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:39:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:39:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:39:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.24752 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: osdmap e62: 8 total, 8 up, 8 in 2026-03-10T13:39:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1422259403' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm05-91018-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1147032706' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm05-91043-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/763033691' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-91213-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/809522417' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm05-91182-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2693885606' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1616203079' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm05-91544-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1155449836' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91673-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3648177896' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-91333-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3859578219' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-91536-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1391050673' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-91476-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-91659-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2864183472' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-91051-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-91156-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4068638736' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-91492-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3991768196' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-91079-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2161082623' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-91276-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.24722 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-91476-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.24700 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-91333-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-91659-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.24653 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-91051-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.24745 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-91536-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.24674 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-91156-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.24710 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-91492-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.24647 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-91079-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-92281-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:19 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2233289382' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-92320-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [==========] Running 12 tests from 1 test suite. 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [----------] Global test environment set-up. 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [----------] 12 tests from AsioRados 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncReadCallback 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncReadCallback (1 ms) 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncReadFuture 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncReadFuture (0 ms) 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncReadYield 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncReadYield (0 ms) 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteCallback 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncWriteCallback (14 ms) 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteFuture 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncWriteFuture (18 ms) 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteYield 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncWriteYield (9 ms) 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationCallback 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationCallback (1 ms) 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationFuture 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationFuture (1 ms) 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationYield 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationYield (0 ms) 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationCallback 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationCallback (12 ms) 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationFuture 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationFuture (4 ms) 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationYield 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationYield (10 ms) 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [----------] 12 tests from AsioRados (70 ms total) 
2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [----------] Global test environment tear-down 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [==========] 12 tests from 1 test suite ran. (2009 ms total) 2026-03-10T13:39:20.447 INFO:tasks.workunit.client.0.vm05.stdout: api_asio: [ PASSED ] 12 tests. 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [==========] Running 11 tests from 3 test suites. 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] Global test environment set-up. 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] 7 tests from LibRadosList 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosList.ListObjects 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosList.ListObjects (348 ms) 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosList.ListObjectsZeroInName 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosList.ListObjectsZeroInName (44 ms) 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosList.ListObjectsNS 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset foo1,foo2,foo3 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo1 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo2 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo3 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset foo1,foo4,foo5 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo4 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo5 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo1 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset foo6,foo7 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo7 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo6 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset :foo1,:foo2,:foo3,ns1:foo1,ns1:foo4,ns1:foo5,ns2:foo6,ns2:foo7 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns1:foo4 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns1:foo5 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns2:foo7 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns2:foo6 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns1:foo1 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: :foo1 2026-03-10T13:39:20.682 INFO:tasks.workunit.client.0.vm05.stdout: api_list: :foo2 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: :foo3 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosList.ListObjectsNS (81 ms) 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosList.ListObjectsStart 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 1 0 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 10 0 2026-03-10T13:39:20.683 
INFO:tasks.workunit.client.0.vm05.stdout: api_list: 13 0 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 7 0 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 14 0 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 0 0 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 15 0 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 11 0 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 5 0 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 8 0 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 6 0 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 3 0 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 4 0 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 12 0 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 9 0 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 2 0 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: have 1 expect one of 0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosList.ListObjectsStart (69 ms) 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosList.ListObjectsCursor 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: x cursor=MIN 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=1 cursor=13:02547ec2:::1:head 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=10 cursor=13:52ea6a34:::10:head 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=13 cursor=13:566253c9:::13:head 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=7 cursor=13:5c6b0b28:::7:head 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=14 cursor=13:62a1935d:::14:head 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=0 cursor=13:6cac518f:::0:head 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=15 cursor=13:863748b0:::15:head 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=11 cursor=13:89d3ae78:::11:head 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=5 cursor=13:b29083e3:::5:head 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=8 cursor=13:bd63b0f1:::8:head 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=6 cursor=13:c4fdafeb:::6:head 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=3 cursor=13:cfc208b3:::3:head 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=4 cursor=13:d83876eb:::4:head 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=12 cursor=13:de5d7c5f:::12:head 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=9 cursor=13:e960b815:::9:head 2026-03-10T13:39:20.683 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > oid=2 cursor=13:f905c69b:::2:head 2026-03-10T13:39:20.684 INFO:tasks.workunit.client.0.vm05.stdout: api_list: FIRST> seek to MIN oid=1 2026-03-10T13:39:20.684 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=1 
cursor=13:02547ec2:::1:head 2026-03-10T13:39:20.684 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:02547ec2:::1:head 2026-03-10T13:39:20.684 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:02547ec2:::1:head -> 1 2026-03-10T13:39:20.684 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=10 cursor=13:52ea6a34:::10:head 2026-03-10T13:39:20.684 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:52ea6a34:::10:head 2026-03-10T13:39:20.684 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:52ea6a34:::10:head -> 10 2026-03-10T13:39:20.684 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=13 cursor=13:566253c9:::13:head 2026-03-10T13:39:20.684 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:566253c9:::13:head 2026-03-10T13:39:20.684 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:566253c9:::13:head -> 13 2026-03-10T13:39:20.684 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=7 cursor=13:5c6b0b28:::7:head 2026-03-10T13:39:20.684 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:5c6b0b28:::7:head 2026-03-10T13:39:20.684 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:5c6b0b28:::7:head -> 7 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=14 cursor=13:62a1935d:::14:head 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:62a1935d:::14:head 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:62a1935d:::14:head -> 14 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=0 cursor=13:6cac518f:::0:head 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:6cac518f:::0:head 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:6cac518f:::0:head -> 0 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=15 cursor=13:863748b0:::15:head 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:863748b0:::15:head 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:863748b0:::15:head -> 15 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=11 cursor=13:89d3ae78:::11:head 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:89d3ae78:::11:head 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:89d3ae78:::11:head -> 11 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=5 cursor=13:b29083e3:::5:head 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:b29083e3:::5:head 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:b29083e3:::5:head -> 5 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=8 cursor=13:bd63b0f1:::8:head 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:bd63b0f1:::8:head 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:bd63b0f1:::8:head -> 8 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=6 cursor=13:c4fdafeb:::6:head 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:c4fdafeb:::6:head 2026-03-10T13:39:20.721 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:c4fdafeb:::6:head -> 6 2026-03-10T13:39:20.722 
INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=3 cursor=13:cfc208b3:::3:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:cfc208b3:::3:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:cfc208b3:::3:head -> 3 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=4 cursor=13:d83876eb:::4:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:d83876eb:::4:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:d83876eb:::4:head -> 4 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=12 cursor=13:de5d7c5f:::12:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:de5d7c5f:::12:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:de5d7c5f:::12:head -> 12 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=9 cursor=13:e960b815:::9:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:e960b815:::9:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:e960b815:::9:head -> 9 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : oid=2 cursor=13:f905c69b:::2:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:f905c69b:::2:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:f905c69b:::2:head -> 2 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:e960b815:::9:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:e960b815:::9:head expected=13:e960b815:::9:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:e960b815:::9:head -> 9 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=9 expected=9 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:de5d7c5f:::12:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:de5d7c5f:::12:head expected=13:de5d7c5f:::12:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:de5d7c5f:::12:head -> 12 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=12 expected=12 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:d83876eb:::4:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:d83876eb:::4:head expected=13:d83876eb:::4:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:d83876eb:::4:head -> 4 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=4 expected=4 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:cfc208b3:::3:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:cfc208b3:::3:head expected=13:cfc208b3:::3:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:cfc208b3:::3:head -> 3 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=3 expected=3 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:c4fdafeb:::6:head 2026-03-10T13:39:20.722 
INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:c4fdafeb:::6:head expected=13:c4fdafeb:::6:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:c4fdafeb:::6:head -> 6 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=6 expected=6 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:bd63b0f1:::8:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:bd63b0f1:::8:head expected=13:bd63b0f1:::8:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:bd63b0f1:::8:head -> 8 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=8 expected=8 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:b29083e3:::5:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:b29083e3:::5:head expected=13:b29083e3:::5:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:b29083e3:::5:head -> 5 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=5 expected=5 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:89d3ae78:::11:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:89d3ae78:::11:head expected=13:89d3ae78:::11:head 2026-03-10T13:39:20.722 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:89d3ae78:::11:head -> 11 2026-03-10T13:39:21.066 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=11 e api_cmd_pp: Running main() from gmock_main.cc 2026-03-10T13:39:21.066 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [==========] Running 3 tests from 1 test suite. 2026-03-10T13:39:21.066 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [----------] Global test environment set-up. 2026-03-10T13:39:21.066 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [----------] 3 tests from LibRadosCmd 2026-03-10T13:39:21.066 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.MonDescribePP 2026-03-10T13:39:21.066 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [ OK ] LibRadosCmd.MonDescribePP (27 ms) 2026-03-10T13:39:21.066 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.OSDCmdPP 2026-03-10T13:39:21.066 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [ OK ] LibRadosCmd.OSDCmdPP (35 ms) 2026-03-10T13:39:21.066 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.PGCmdPP 2026-03-10T13:39:21.066 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [ OK ] LibRadosCmd.PGCmdPP (2268 ms) 2026-03-10T13:39:21.066 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [----------] 3 tests from LibRadosCmd (2330 ms total) 2026-03-10T13:39:21.066 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: 2026-03-10T13:39:21.066 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [----------] Global test environment tear-down 2026-03-10T13:39:21.066 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [==========] 3 tests from 1 test suite ran. (2330 ms total) 2026-03-10T13:39:21.066 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd_pp: [ PASSED ] 3 tests. 
2026-03-10T13:39:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: pgmap v42: 676 pgs: 544 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 43 B/s, 1 objects/s recovering 2026-03-10T13:39:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: Health check failed: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1422259403' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm05-91018-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1147032706' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm05-91043-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/763033691' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-91213-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/809522417' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm05-91182-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2693885606' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1616203079' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm05-91544-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1155449836' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91673-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2161082623' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-91276-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24722 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-91476-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24700 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-91333-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24653 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-91051-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24745 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-91536-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24674 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-91156-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24710 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-91492-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24647 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-91079-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-92281-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-92320-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: osdmap e63: 8 total, 8 up, 8 in 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1111934311' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm05-92281-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-92281-1"}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm05-92320-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-92320-1"}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3953571869' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91682-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3559505800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24799 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91682-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/663450739' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-91906-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24805 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.24823 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-91906-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: pgmap v42: 676 pgs: 544 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 43 B/s, 1 objects/s recovering 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: Health check failed: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1422259403' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm05-91018-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1147032706' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm05-91043-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/763033691' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-91213-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/809522417' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm05-91182-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2693885606' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1616203079' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm05-91544-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1155449836' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91673-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2161082623' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-91276-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.24722 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-91476-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.24700 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-91333-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.24653 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-91051-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.24745 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-91536-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.24674 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-91156-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.24710 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-91492-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.24647 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-91079-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-92281-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-92320-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: osdmap e63: 8 total, 8 up, 8 in 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1111934311' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm05-92281-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-92281-1"}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm05-92320-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-92320-1"}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3953571869' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91682-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3559505800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.24799 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91682-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/663450739' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-91906-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.24805 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.24823 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-91906-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: pgmap v42: 676 pgs: 544 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s; 43 B/s, 1 objects/s recovering 2026-03-10T13:39:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: Health check failed: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1422259403' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm05-91018-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1147032706' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm05-91043-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/763033691' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm05-91213-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/809522417' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm05-91182-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2693885606' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1616203079' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm05-91544-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1155449836' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91673-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2161082623' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm05-91276-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.24722 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm05-91476-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.24700 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm05-91333-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.24653 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm05-91051-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.24745 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm05-91536-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.24674 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm05-91156-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.24710 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm05-91492-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.24647 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm05-91079-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm05-92281-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm05-92320-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: osdmap e63: 8 total, 8 up, 8 in 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1111934311' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm05-92281-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-92281-1"}]: dispatch 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm05-92320-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-92320-1"}]: dispatch 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3953571869' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91682-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3559505800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.24799 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91682-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/663450739' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-91906-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.24805 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.24823 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-91906-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: Running main() from gmock_main.cc 2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [==========] Running 17 tests from 1 test suite. 2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [----------] Global test environment set-up. 
2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [----------] 17 tests from CReadOpsTest
2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.NewDelete
2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.NewDelete (0 ms)
2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.SetOpFlags
2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.SetOpFlags (472 ms)
2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.AssertExists
2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.AssertExists (31 ms)
2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.AssertVersion
2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.AssertVersion (7 ms)
2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.CmpXattr
2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.CmpXattr (13 ms)
2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Read
2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Read (5 ms)
2026-03-10T13:39:22.012 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Checksum
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Checksum (12 ms)
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.RWOrderedRead
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.RWOrderedRead (11 ms)
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.ShortRead
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.ShortRead (11 ms)
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Exec
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Exec (5 ms)
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.ExecUserBuf
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.ExecUserBuf (8 ms)
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Stat
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Stat (6 ms)
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Stat2
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Stat2 (6 ms)
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Omap
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Omap (20 ms)
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.OmapNuls
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.OmapNuls (14 ms)
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.GetXattrs
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.GetXattrs (17 ms)
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.CmpExt
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ OK ] CReadOpsTest.CmpExt (13 ms)
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [----------] 17 tests from CReadOpsTest (651 ms total)
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations:
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [----------] Global test environment tear-down
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [==========] 17 tests from 1 test suite ran. (3069 ms total)
2026-03-10T13:39:22.013 INFO:tasks.workunit.client.0.vm05.stdout: api_c_read_operations: [ PASSED ] 17 tests.
2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: Running main() from gmock_main.cc
2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [==========] Running 4 tests from 1 test suite.
2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [----------] Global test environment set-up.
2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [----------] 4 tests from LibRadosCmd
2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ RUN ] LibRadosCmd.MonDescribe
2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ OK ] LibRadosCmd.MonDescribe (72 ms)
2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ RUN ] LibRadosCmd.OSDCmd
2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ OK ] LibRadosCmd.OSDCmd (41 ms)
2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ RUN ] LibRadosCmd.PGCmd
2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ OK ] LibRadosCmd.PGCmd (3124 ms)
2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ RUN ] LibRadosCmd.WatchLog
2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:20.959853+0000 mon.a [INF] from='client.24752 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-91659-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]': finished
2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:20.959992+0000 mon.a [INF] from='client.24799 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91682-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:20.960105+0000 mon.a [INF] from='client.24805 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-1","app":
"rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:20.960187+0000 mon.a [INF] from='client.24823 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-91906-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.072552+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.093071+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.026 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.139826+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.141201+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.145401+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.158311+0000 mon.a [INF] from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.169120+0000 mon.b [INF] from='client.? 
v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.216603+0000 mon.a [INF] from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.216733+0000 mon.a [INF] from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.216801+0000 mon.a [INF] from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.216914+0000 mon.a [INF] from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.219727+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.220162+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.220669+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.221079+0000 mon.b [INF] from='client.? 
v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.227384+0000 mon.a [INF] from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.227451+0000 mon.a [INF] from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.227524+0000 mon.a [INF] from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.228168+0000 mon.a [INF] from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.229609+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-91544-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.230039+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-91492-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.234763+0000 mon.a [INF] from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-91544-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.235075+0000 mon.a [INF] from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-91492-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.027 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.236561+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-91079-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.037 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.236947+0000 mon.a [INF] from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-91079-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: d handler_error: Running main() from gmock_main.cc 2026-03-10T13:39:22.037 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [==========] Running 1 test from 1 test suite. 
2026-03-10T13:39:22.037 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [----------] Global test environment set-up.
2026-03-10T13:39:22.037 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [----------] 1 test from neocls_handler_error
2026-03-10T13:39:22.038 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [ RUN ] neocls_handler_error.test_handler_error
2026-03-10T13:39:22.038 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [ OK ] neocls_handler_error.test_handler_error (2873 ms)
2026-03-10T13:39:22.038 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [----------] 1 test from neocls_handler_error (2873 ms total)
2026-03-10T13:39:22.038 INFO:tasks.workunit.client.0.vm05.stdout: handler_error:
2026-03-10T13:39:22.038 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [----------] Global test environment tear-down
2026-03-10T13:39:22.038 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [==========] 1 test from 1 test suite ran. (2873 ms total)
2026-03-10T13:39:22.038 INFO:tasks.workunit.client.0.vm05.stdout: handler_error: [ PASSED ] 1 test.
2026-03-10T13:39:22.041 INFO:tasks.workunit.client.0.vm05.stdout: cls: Running main() from gmock_main.cc
2026-03-10T13:39:22.041 INFO:tasks.workunit.client.0.vm05.stdout: cls: [==========] Running 1 test from 1 test suite.
2026-03-10T13:39:22.041 INFO:tasks.workunit.client.0.vm05.stdout: cls: [----------] Global test environment set-up.
2026-03-10T13:39:22.041 INFO:tasks.workunit.client.0.vm05.stdout: cls: [----------] 1 test from NeoRadosCls
2026-03-10T13:39:22.041 INFO:tasks.workunit.client.0.vm05.stdout: cls: [ RUN ] NeoRadosCls.DNE
2026-03-10T13:39:22.041 INFO:tasks.workunit.client.0.vm05.stdout: cls: [ OK ] NeoRadosCls.DNE (2841 ms)
2026-03-10T13:39:22.041 INFO:tasks.workunit.client.0.vm05.stdout: cls: [----------] 1 test from NeoRadosCls (2841 ms total)
2026-03-10T13:39:22.041 INFO:tasks.workunit.client.0.vm05.stdout: cls:
2026-03-10T13:39:22.041 INFO:tasks.workunit.client.0.vm05.stdout: cls: [----------] Global test environment tear-down
2026-03-10T13:39:22.041 INFO:tasks.workunit.client.0.vm05.stdout: cls: [==========] 1 test from 1 test suite ran. (2841 ms total)
2026-03-10T13:39:22.041 INFO:tasks.workunit.client.0.vm05.stdout: cls: [ PASSED ] 1 test.
2026-03-10T13:39:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24752 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-91659-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]': finished 2026-03-10T13:39:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24799 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91682-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24805 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24823 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-91906-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: osdmap e64: 8 total, 8 up, 8 in 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: pgmap v45: 908 pgs: 776 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-91544-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-91492-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-91544-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-91492-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-91079-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-91079-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24752 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-91659-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]': finished 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24799 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91682-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24805 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24823 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-91906-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: osdmap e64: 8 total, 8 up, 8 in 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: pgmap v45: 908 pgs: 776 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-91544-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-91492-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-91544-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-91492-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-91079-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-91079-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24752 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm05-91659-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]': finished 2026-03-10T13:39:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24799 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91682-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24805 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24823 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm05-91906-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: osdmap e64: 8 total, 8 up, 8 in 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: pgmap v45: 908 pgs: 776 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-91544-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-91492-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-91544-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-91492-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-91079-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-91079-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:22.658 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_1_[91941]: starting. 
2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_1_[91941]: creating pool ceph_test_rados_list_parallel.vm05-91902 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_1_[91941]: created object 0... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_1_[91941]: created object 25... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_1_[91941]: created object 49... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_1_[91941]: finishing. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_1_[91941]: shutting down. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_2_[91942]: starting. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_2_[91942]: listing objects. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_2_[91942]: listed object 0... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_2_[91942]: listed object 25... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_2_[91942]: saw 50 objects 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_2_[91942]: shutting down. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_3_[92732]: starting. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_3_[92732]: creating pool ceph_test_rados_list_parallel.vm05-91902 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_3_[92732]: created object 0... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_3_[92732]: created object 25... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_3_[92732]: created object 49... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_3_[92732]: finishing. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_3_[92732]: shutting down. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_4_[92733]: starting. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_4_[92733]: listing objects. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_4_[92733]: listed object 0... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_4_[92733]: listed object 25... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_4_[92733]: saw 46 objects 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_4_[92733]: shutting down. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_5_[92734]: starting. 
2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_5_[92734]: removed 25 objects... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_5_[92734]: removed half of the objects 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_5_[92734]: removed 50 objects... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_5_[92734]: removed 50 objects 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_5_[92734]: shutting down. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_6_[92780]: starting. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_6_[92780]: creating pool ceph_test_rados_list_parallel.vm05-91902 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_6_[92780]: created object 0... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_6_[92780]: created object 25... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_6_[92780]: created object 49... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_6_[92780]: finishing. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_6_[92780]: shutting down. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_7_[92781]: starting. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_7_[92781]: listing objects. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_7_[92781]: listed object 0... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_7_[92781]: listed object 25... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_7_[92781]: listed object 50... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_7_[92781]: saw 53 objects 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_7_[92781]: shutting down. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_8_[92782]: starting. 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_8_[92782]: added 25 objects... 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_8_[92782]: added half of the objects 2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_8_[92782]: added 50 objects... 
2026-03-10T13:39:22.659 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_8_[92782]: added 50 objects 2026-03-10T13:39:22.660 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_8_[92782]: shutting down. 2026-03-10T13:39:22.660 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:22.660 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:22.660 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.135 INFO:tasks.workunit.client.0.vm05.stdout: ispatch 2026-03-10T13:39:23.135 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.977665+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm05-92281-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-92281-1"}]': finished 2026-03-10T13:39:23.135 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.977766+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm05-92320-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-92320-1"}]': finished 2026-03-10T13:39:23.135 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.978003+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:23.135 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.978816+0000 mon.a [INF] from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.136 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.978861+0000 mon.a [INF] from='client.24985 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-91544-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.136 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.978885+0000 mon.a [INF] from='client.24988 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-91492-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.136 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.978997+0000 mon.a [INF] from='client.24982 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-91079-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.136 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.986048+0000 mon.b [INF] from='client.? 
v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:23.136 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.986084+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-91544-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:23.136 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.989130+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-91079-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:23.136 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.989234+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-91492-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:23.136 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.990401+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/3064015293' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-91018-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.136 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:21.994351+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/867975640' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: api_c list_parallel: process_9_[92991]: starting. 2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_9_[92991]: creating pool ceph_test_rados_list_parallel.vm05-91902 2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_9_[92991]: created object 0... 2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_9_[92991]: created object 25... 2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_9_[92991]: created object 49... 2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_9_[92991]: finishing. 2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_9_[92991]: shutting down. 2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[92992]: starting. 2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[92992]: listing objects. 
2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[92992]: listed object 0... 2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[92992]: listed object 25... 2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[92992]: listed object 50... 2026-03-10T13:39:23.284 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[92992]: listed object 75... 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[92992]: saw 99 objects 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_10_[92992]: shutting down. 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_13_[92995]: starting. 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_13_[92995]: removed 25 objects... 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_13_[92995]: removed half of the objects 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_13_[92995]: removed 50 objects... 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_13_[92995]: removed 50 objects 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_13_[92995]: shutting down. 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_12_[92994]: starting. 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_12_[92994]: added 25 objects... 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_12_[92994]: added half of the objects 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_12_[92994]: added 50 objects... 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_12_[92994]: added 50 objects 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_12_[92994]: shutting down. 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_11_[92993]: starting. 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_11_[92993]: added 25 objects... 
2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_11_[92993]: added half of the objects 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_11_[92993]: added 50 objects... 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_11_[92993]: added 50 objects 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_11_[92993]: shutting down. 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_14_[93071]: starting. 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_14_[93071]: creating pool ceph_test_rados_list_parallel.vm05-91902 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_14_[93071]: created object 0... 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_14_[93071]: created object 25... 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_14_[93071]: created object 49... 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_14_[93071]: finishing. 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_14_[93071]: shutting down. 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[93072]: starting. 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[93072]: listing objects. 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[93072]: listed object 0... 2026-03-10T13:39:23.285 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[93072]: listed object 25... 2026-03-10T13:39:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm05-92281-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-92281-1"}]': finished 2026-03-10T13:39:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2233289382' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm05-92320-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-92320-1"}]': finished 2026-03-10T13:39:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.24985 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-91544-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.24988 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-91492-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.24982 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-91079-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-91544-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-91079-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-91492-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3064015293' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-91018-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: osdmap e65: 8 total, 8 up, 8 in 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/867975640' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3543413869' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: onexx 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-91544-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-91079-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-91492-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.24946 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-91018-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.24958 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.25030 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2864513166' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.24962 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:39:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm05-92281-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-92281-1"}]': finished 2026-03-10T13:39:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm05-92320-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-92320-1"}]': finished 2026-03-10T13:39:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.24985 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-91544-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.24988 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-91492-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.24982 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-91079-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-91544-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-91079-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-91492-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3064015293' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-91018-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: osdmap e65: 8 total, 8 up, 8 in 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/867975640' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3543413869' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: onexx 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-91544-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-91079-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-91492-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.24946 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-91018-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.24958 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.25030 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2864513166' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.24962 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm05-92281-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm05-92281-1"}]': finished 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm05-92320-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm05-92320-1"}]': finished 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.24985 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm05-91544-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.24988 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm05-91492-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.24982 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm05-91079-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-91544-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-91079-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-91492-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3064015293' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-91018-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: osdmap e65: 8 total, 8 up, 8 in 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/867975640' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3543413869' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: onexx 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-91544-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-91079-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-91492-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.24946 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-91018-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.24958 ' entity='client.admin' cmd=[{"prefix": "osd pool application 
enable","pool": "test-rados-api-vm05-91411-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.25030 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2864513166' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.24962 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:39:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[93072]: listed object 50... 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[93072]: listed object 75... 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[93072]: listed object 100... 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[93072]: listed object 125... 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[93072]: saw 150 objects 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_15_[93072]: shutting down. 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_16_[93073]: starting. 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_16_[93073]: added 25 objects... 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_16_[93073]: added half of the objects 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_16_[93073]: added 50 objects... 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_16_[93073]: added 50 objects 2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: process_16_[93073]: shutting down. 
2026-03-10T13:39:24.351 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:24.352 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:24.352 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:24.352 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:24.352 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******************************* 2026-03-10T13:39:24.352 INFO:tasks.workunit.client.0.vm05.stdout: list_parallel: ******* SUCCESS ********** 2026-03-10T13:39:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.24946 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-91018-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.24958 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.24962 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: osdmap e66: 8 total, 8 up, 8 in 2026-03-10T13:39:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2032474750' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.25039 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-92281-1"}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.25030 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3543413869' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: pgmap v48: 844 pgs: 712 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: twoxx 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.25030 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-91182-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-91182-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-91213-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-91213-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2832419191' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-10T13:39:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.24946 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-91018-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.24958 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.24962 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: osdmap e66: 8 total, 8 up, 8 in 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2032474750' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.25039 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-92281-1"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.25030 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3543413869' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: pgmap v48: 844 pgs: 712 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: twoxx 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.25030 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-91182-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-91182-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-91213-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-91213-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2832419191' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.24946 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-91018-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.24958 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.24962 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: osdmap e66: 8 total, 8 up, 8 in 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2032474750' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.25039 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-92281-1"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.25030 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3543413869' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: pgmap v48: 844 pgs: 712 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: twoxx 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.25030 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-91182-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-91182-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-91213-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-91213-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:24.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2832419191' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_1_[92072]: starting. 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_1_[92072]: creating pool ceph_test_rados_delete_pools_parallel.vm05-91983 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_1_[92072]: created object 0... 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_1_[92072]: created object 25... 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_1_[92072]: created object 49... 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_1_[92072]: finishing. 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_1_[92072]: shutting down. 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_2_[92074]: starting. 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_2_[92074]: deleting pool ceph_test_rados_delete_pools_parallel.vm05-91983 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_2_[92074]: shutting down. 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: ******************************* 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_3_[92877]: starting. 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_3_[92877]: creating pool ceph_test_rados_delete_pools_parallel.vm05-91983 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_3_[92877]: created object 0... 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_3_[92877]: created object 25... 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_3_[92877]: created object 49... 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_3_[92877]: finishing. 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_3_[92877]: shutting down. 
2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: ******************************* 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_5_[92879]: starting. 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_5_[92879]: listing objects. 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_5_[92879]: listed object 0... 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_5_[92879]: listed object 25... 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_5_[92879]: saw 50 objects 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_5_[92879]: shutting down. 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: ******************************* 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_4_[92878]: starting. 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_4_[92878]: deleting pool ceph_test_rados_delete_pools_parallel.vm05-91983 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: process_4_[92878]: shutting down. 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: ******************************* 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: ******************************* 2026-03-10T13:39:25.780 INFO:tasks.workunit.client.0.vm05.stdout: delete_pools_parallel: ******* SUCCESS ********** 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_1_[92013]: starting. 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_1_[92013]: creating pool ceph_test_rados_open_pools_parallel.vm05-91948 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_1_[92013]: created object 0... 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_1_[92013]: created object 25... 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_1_[92013]: created object 49... 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_1_[92013]: finishing. 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_1_[92013]: shutting down. 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_2_[92024]: starting. 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_2_[92024]: rados_pool_create. 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_2_[92024]: rados_ioctx_create. 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_2_[92024]: shutting down. 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: ******************************* 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_3_[92875]: starting. 
2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_3_[92875]: creating pool ceph_test_rados_open_pools_parallel.vm05-91948 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_3_[92875]: created object 0... 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_3_[92875]: created object 25... 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_3_[92875]: created object 49... 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_3_[92875]: finishing. 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_3_[92875]: shutting down. 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: ******************************* 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_4_[92876]: starting. 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_4_[92876]: rados_pool_create. 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_4_[92876]: rados_ioctx_create. 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: process_4_[92876]: shutting down. 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: ******************************* 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: ******************************* 2026-03-10T13:39:25.806 INFO:tasks.workunit.client.0.vm05.stdout: open_pools_parallel: ******* SUCCESS ********** 2026-03-10T13:39:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:26.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.24985 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-91544-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-91544-7"}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.24982 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-91079-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-91079-16"}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.24988 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-91492-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-91492-7"}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.24752 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.25039 ' entity='client.admin' cmd='[{"prefix": 
"osd pool application enable","pool": "test-rados-api-vm05-91863-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-92281-1"}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.25153 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-91182-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.25150 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-91213-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2864513166' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: osdmap e67: 8 total, 8 up, 8 in 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-91182-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-91213-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-92281-1"}]: dispatch 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2233289382' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-92320-1"}]: dispatch 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.24962 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-91182-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-91213-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.25030 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-4", "tierpool": "test-rados-api-vm05-91276-4-cache"}]: dispatch 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.24985 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-91544-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-91544-7"}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.24982 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-91079-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-91079-16"}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.24988 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-91492-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-91492-7"}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.24752 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]': finished 2026-03-10T13:39:26.095 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.25039 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-92281-1"}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.25153 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-91182-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.25150 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-91213-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:26.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:26.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2864513166' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:26.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: osdmap e67: 8 total, 8 up, 8 in 2026-03-10T13:39:26.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-91182-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:26.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-91213-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:26.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-92281-1"}]: dispatch 2026-03-10T13:39:26.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:26.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2233289382' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-92320-1"}]: dispatch 2026-03-10T13:39:26.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.24962 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:26.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-91182-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:26.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-91213-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:26.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.25030 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-10T13:39:26.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-4", "tierpool": "test-rados-api-vm05-91276-4-cache"}]: dispatch 2026-03-10T13:39:26.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.24985 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-91544-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-91544-7"}]': finished 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.24982 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-91079-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-91079-16"}]': finished 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.24988 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-91492-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-91492-7"}]': finished 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.24752 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]': finished 2026-03-10T13:39:26.174 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.25039 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-92281-1"}]': finished 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.25153 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-91182-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.25150 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-91213-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2864513166' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: osdmap e67: 8 total, 8 up, 8 in 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-91182-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-91213-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-92281-1"}]: dispatch 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2233289382' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-92320-1"}]: dispatch 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.24962 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-91182-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-91213-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.25030 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-4", "tierpool": "test-rados-api-vm05-91276-4-cache"}]: dispatch 2026-03-10T13:39:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:26.381 INFO:tasks.workunit.client.0.vm05.stdout:md: got: 2026-03-10T13:39:22.025490+0000 mon.b [INF] from='client.? 
v1:192.168.123.105:0/3543413869' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:22.028166+0000 client.admin [INF] onexx 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:22.030513+0000 mon.a [INF] from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:22.031267+0000 mon.a [INF] from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-91544-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:22.062905+0000 mon.a [INF] from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-91079-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:22.063174+0000 mon.a [INF] from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-91492-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:22.063298+0000 mon.a [INF] from='client.24946 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm05-91018-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:22.063383+0000 mon.a [INF] from='client.24958 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:22.063543+0000 mon.a [INF] from='client.25030 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:22.103856+0000 mon.c [INF] from='client.? 
v1:192.168.123.105:0/2864513166' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:22.130218+0000 mon.a [INF] from='client.24962 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:22.247265+0000 mon.a [INF] from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.294066+0000 mon.a [INF] from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.295368+0000 mon.a [INF] from='client.24985 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm05-91544-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm05-91544-7"}]': finished 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.295397+0000 mon.a [INF] from='client.24982 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm05-91079-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm05-91079-16"}]': finished 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.295420+0000 mon.a [INF] from='client.24988 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm05-91492-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm05-91492-7"}]': finished 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.295441+0000 mon.a [INF] from='client.24752 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm05-91659-1"}]': finished 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.295460+0000 mon.a [INF] from='client.25039 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.295478+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm05-92281-1"}]': finished 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.295496+0000 mon.a [INF] from='client.? 
v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.295532+0000 mon.a [INF] from='client.25153 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm05-91182-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.295551+0000 mon.a [INF] from='client.25150 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm05-91213-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.303333+0000 mon.c [INF] from='client.? v1:192.168.123.105:0/1723243852' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.305909+0000 mon.c [INF] from='client.? v1:192.168.123.105:0/2864513166' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.342947+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-91182-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.343325+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-91213-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.347079+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-92281-1"}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.348118+0000 mon.a [INF] from='client.24752 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.348210+0000 mon.a [INF] from='client.? 
v1:192.168.123.105:0/2233289382' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-92320-1"}]: dispatch 2026-03-10T13:39:26.382 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.348324+0000 mon.a [INF] from='client.24962 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.366086+0000 mon.a [INF] from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-91182-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.366431+0000 mon.a [INF] from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-91213-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.367885+0000 mon.a [INF] from='client.25030 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:24.426945+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-4", "tierpool": "test-rados-api-vm05-91276-4-cache"}]: dispatch 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.684436+0000 mon.a [WRN] Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.689243+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-92281-1"}]': finished 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.689298+0000 mon.a [INF] from='client.24752 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]': finished 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.689323+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-92320-1"}]': finished 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.689356+0000 mon.a [INF] from='client.24962 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.689381+0000 mon.a [INF] from='client.? 
v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-4", "tierpool": "test-rados-api-vm05-91276-4-cache"}]': finished 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.727929+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/2856084969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-92281-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.739108+0000 mon.c [INF] from='client.? v1:192.168.123.105:0/2864513166' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-91051-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.774814+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/51560303' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.775627+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/2847671089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-91018-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.803426+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-92320-1"}]: dispatch 2026-03-10T13:39:27.911 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.803570+0000 mon.a [INF] from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-4", "overlaypool": "test-rados-api-vm05-91276-4-cache"}]: dispatch 2026-03-10T13:39:28.024 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.803685+0000 mon.a [INF] from='client.24962 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm0 api_io_pp: Running main() from gmock_main.cc 2026-03-10T13:39:28.024 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [==========] Running 39 tests from 2 test suites. 2026-03-10T13:39:28.024 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [----------] Global test environment set-up. 
2026-03-10T13:39:28.024 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [----------] 21 tests from LibRadosIoPP 2026-03-10T13:39:28.024 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: seed 91043 2026-03-10T13:39:28.024 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.TooBigPP 2026-03-10T13:39:28.024 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.TooBigPP (0 ms) 2026-03-10T13:39:28.024 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.SimpleWritePP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.SimpleWritePP (381 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.ReadOpPP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.ReadOpPP (15 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.SparseReadOpPP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.SparseReadOpPP (5 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RoundTripPP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.RoundTripPP (5 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RoundTripPP2 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.RoundTripPP2 (8 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.Checksum 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.Checksum (17 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.ReadIntoBufferlist 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.ReadIntoBufferlist (6 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.OverlappingWriteRoundTripPP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.OverlappingWriteRoundTripPP (18 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.WriteFullRoundTripPP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.WriteFullRoundTripPP (5 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.WriteFullRoundTripPP2 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.WriteFullRoundTripPP2 (3 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.AppendRoundTripPP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.AppendRoundTripPP (6 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.TruncTestPP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.TruncTestPP (5 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RemoveTestPP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.RemoveTestPP (4 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] 
LibRadosIoPP.XattrsRoundTripPP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.XattrsRoundTripPP (6 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RmXattrPP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.RmXattrPP (18 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.XattrListPP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.XattrListPP (10 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CrcZeroWrite 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.CrcZeroWrite (3 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtPP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtPP (5 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtDNEPP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtDNEPP (5 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtMismatchPP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtMismatchPP (10 ms) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [----------] 21 tests from LibRadosIoPP (535 ms total) 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [----------] 18 tests from LibRadosIoECPP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.SimpleWritePP 2026-03-10T13:39:28.025 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.SimpleWritePP (2054 ms) 2026-03-10T13:39:28.026 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.ReadOpPP 2026-03-10T13:39:28.026 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.ReadOpPP (1523 ms) 2026-03-10T13:39:28.026 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.SparseReadOpPP 2026-03-10T13:39:28.026 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.SparseReadOpPP (58 ms) 2026-03-10T13:39:28.026 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RoundTripPP 2026-03-10T13:39:28.026 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RoundTripPP (32 ms) 2026-03-10T13:39:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: pgmap v50: 900 pgs: 224 creating+peering, 1 active, 13 creating+activating, 192 unknown, 470 active+clean; 459 KiB data, 340 MiB used, 160 GiB / 160 GiB avail; 2.9 KiB/s rd, 22 KiB/s wr, 310 op/s 2026-03-10T13:39:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1111934311' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-92281-1"}]': finished 2026-03-10T13:39:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.24752 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]': finished 2026-03-10T13:39:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-92320-1"}]': finished 2026-03-10T13:39:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.24962 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-4", "tierpool": "test-rados-api-vm05-91276-4-cache"}]': finished 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2856084969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-92281-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2864513166' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-91051-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/51560303' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2847671089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-91018-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: osdmap e68: 8 total, 8 up, 8 in 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-92320-1"}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-4", "overlaypool": "test-rados-api-vm05-91276-4-cache"}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.24962 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-91051-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.25231 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-92281-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.25222 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.25225 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-91018-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2132186857' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: threexx 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3543413869' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[58955]: from='client.25030 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: pgmap v50: 900 pgs: 224 creating+peering, 1 active, 13 creating+activating, 192 unknown, 470 active+clean; 459 KiB data, 340 MiB used, 160 GiB / 160 GiB avail; 2.9 KiB/s rd, 22 KiB/s wr, 310 op/s 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-92281-1"}]': finished 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.24752 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]': finished 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2233289382' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-92320-1"}]': finished 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.24962 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-4", "tierpool": "test-rados-api-vm05-91276-4-cache"}]': finished 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2856084969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-92281-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2864513166' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-91051-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/51560303' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2847671089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-91018-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: osdmap e68: 8 total, 8 up, 8 in 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-92320-1"}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-4", "overlaypool": "test-rados-api-vm05-91276-4-cache"}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.24962 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-91051-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.25231 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-92281-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.25222 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.25225 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-91018-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2132186857' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: threexx 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3543413869' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T13:39:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:27 vm05 ceph-mon[51512]: from='client.25030 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T13:39:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: pgmap v50: 900 pgs: 224 creating+peering, 1 active, 13 creating+activating, 192 unknown, 470 active+clean; 459 KiB data, 340 MiB used, 160 GiB / 160 GiB avail; 2.9 KiB/s rd, 22 KiB/s wr, 310 op/s 2026-03-10T13:39:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1111934311' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm05-92281-1"}]': finished 2026-03-10T13:39:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.24752 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm05-91659-1"}]': finished 2026-03-10T13:39:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2233289382' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm05-92320-1"}]': finished 2026-03-10T13:39:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.24962 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm05-91051-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-4", "tierpool": "test-rados-api-vm05-91276-4-cache"}]': finished 2026-03-10T13:39:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2856084969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-92281-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2864513166' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-91051-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T13:39:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/51560303' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2847671089' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-91018-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: osdmap e68: 8 total, 8 up, 8 in 2026-03-10T13:39:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-92320-1"}]: dispatch 2026-03-10T13:39:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-4", "overlaypool": "test-rados-api-vm05-91276-4-cache"}]: dispatch 2026-03-10T13:39:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.24962 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-91051-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T13:39:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.25231 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-92281-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.25222 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.25225 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-91018-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2132186857' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-10T13:39:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: threexx 2026-03-10T13:39:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3543413869' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T13:39:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:27 vm09 ceph-mon[53367]: from='client.25030 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T13:39:28.284 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN cmd: Running main() from gmock_main.cc 2026-03-10T13:39:28.284 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [==========] Running 3 tests from 1 test suite. 2026-03-10T13:39:28.284 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [----------] Global test environment set-up. 
2026-03-10T13:39:28.284 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [----------] 3 tests from NeoRadosCmd 2026-03-10T13:39:28.284 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [ RUN ] NeoRadosCmd.MonDescribe 2026-03-10T13:39:28.284 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [ OK ] NeoRadosCmd.MonDescribe (1792 ms) 2026-03-10T13:39:28.284 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [ RUN ] NeoRadosCmd.OSDCmd 2026-03-10T13:39:28.284 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [ OK ] NeoRadosCmd.OSDCmd (2068 ms) 2026-03-10T13:39:28.284 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [ RUN ] NeoRadosCmd.PGCmd 2026-03-10T13:39:28.284 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [ OK ] NeoRadosCmd.PGCmd (5205 ms) 2026-03-10T13:39:28.284 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [----------] 3 tests from NeoRadosCmd (9065 ms total) 2026-03-10T13:39:28.284 INFO:tasks.workunit.client.0.vm05.stdout: cmd: 2026-03-10T13:39:28.284 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [----------] Global test environment tear-down 2026-03-10T13:39:28.284 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [==========] 3 tests from 1 test suite ran. (9066 ms total) 2026-03-10T13:39:28.284 INFO:tasks.workunit.client.0.vm05.stdout: cmd: [ PASSED ] 3 tests. 2026-03-10T13:39:28.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:39:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:39:29.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:29.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: pgmap v52: 804 pgs: 32 creating+peering, 1 active, 13 creating+activating, 288 unknown, 470 active+clean; 459 KiB data, 340 MiB used, 160 GiB / 160 GiB avail; 2.3 KiB/s rd, 18 KiB/s wr, 251 op/s 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25153 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-91182-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-91182-10"}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25150 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-91213-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-91213-10"}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-92320-1"}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-4", "overlaypool": "test-rados-api-vm05-91276-4-cache"}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.24962 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-91051-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25231 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-92281-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25222 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25225 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-91018-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2856084969' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm05-92281-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: osdmap e69: 8 total, 8 up, 8 in 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1426279600' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3978117112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm05-92320-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-4-cache", "mode": "writeback"}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25231 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm05-92281-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: pgmap v52: 804 pgs: 32 creating+peering, 1 active, 13 creating+activating, 288 unknown, 470 active+clean; 459 KiB data, 340 MiB used, 160 GiB / 160 GiB avail; 2.3 KiB/s rd, 18 KiB/s wr, 251 op/s 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25153 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-91182-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-91182-10"}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25150 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-91213-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-91213-10"}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-92320-1"}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-4", "overlaypool": "test-rados-api-vm05-91276-4-cache"}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.24962 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-91051-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25231 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-92281-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25222 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25225 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-91018-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2856084969' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm05-92281-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: osdmap e69: 8 total, 8 up, 8 in 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1426279600' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3978117112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm05-92320-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-4-cache", "mode": "writeback"}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25231 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm05-92281-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25267 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25285 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm05-92320-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25030 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: fourxx 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3543413869' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25030 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-4-cache", "mode": "writeback"}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25267 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25285 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm05-92320-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3978117112' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm05-92320-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: osdmap e70: 8 total, 8 up, 8 in 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25285 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm05-92320-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91659-6", "pg_num": 4}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91659-6", "pg_num": 4}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-4"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-91043-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[58955]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-91043-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25267 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25285 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm05-92320-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25030 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: fourxx 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3543413869' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25030 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-4-cache", "mode": "writeback"}]': finished 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25267 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25285 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm05-92320-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3978117112' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm05-92320-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: osdmap e70: 8 total, 8 up, 8 in 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25285 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm05-92320-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91659-6", "pg_num": 4}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91659-6", "pg_num": 4}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-4"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-91043-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T13:39:29.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:28 vm05 ceph-mon[51512]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-91043-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T13:39:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: pgmap v52: 804 pgs: 32 creating+peering, 1 active, 13 creating+activating, 288 unknown, 470 active+clean; 459 KiB data, 340 MiB used, 160 GiB / 160 GiB avail; 2.3 KiB/s rd, 18 KiB/s wr, 251 op/s 2026-03-10T13:39:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25153 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm05-91182-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm05-91182-10"}]': finished 2026-03-10T13:39:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25150 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm05-91213-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm05-91213-10"}]': finished 2026-03-10T13:39:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2233289382' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm05-92320-1"}]': finished 2026-03-10T13:39:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-4", "overlaypool": "test-rados-api-vm05-91276-4-cache"}]': finished 2026-03-10T13:39:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.24962 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm05-91051-3", "field": "max_bytes", "val": "4096"}]': finished 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25231 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-92281-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25222 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25225 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-91018-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2856084969' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm05-92281-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: osdmap e69: 8 total, 8 up, 8 in 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1426279600' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3978117112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm05-92320-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-4-cache", "mode": "writeback"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25231 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm05-92281-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25267 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25285 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm05-92320-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25030 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: fourxx 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3543413869' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25030 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-4-cache", "mode": "writeback"}]': finished 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25267 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25285 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm05-92320-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3978117112' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm05-92320-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: osdmap e70: 8 total, 8 up, 8 in 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25285 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm05-92320-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91659-6", "pg_num": 4}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91659-6", "pg_num": 4}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-4"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-91043-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T13:39:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:28 vm09 ceph-mon[53367]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-91043-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch 2026-03-10T13:39:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-10T13:39:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-10T13:39:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-10T13:39:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-10T13:39:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-10T13:39:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: 16.4 deep-scrub starts 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: 16.4 deep-scrub ok 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.25030 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: 16.6 deep-scrub starts 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: 16.6 deep-scrub ok 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: 16.1 deep-scrub starts 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: 16.1 deep-scrub ok 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: 16.9 deep-scrub starts 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: 16.9 deep-scrub ok 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.25231 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm05-92281-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-92281-2"}]': finished 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.24988 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]': finished 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.24985 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]': finished 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91659-6", "pg_num": 4}]': finished 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-4"}]': finished 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-91043-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: osdmap e71: 8 total, 8 up, 8 in 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-4", "tierpool": "test-rados-api-vm05-91276-4-cache"}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3210260850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-91018-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3060166736' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.25297 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-91018-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.25309 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:29 vm09 ceph-mon[53367]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-10T13:39:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-10T13:39:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-10T13:39:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: 16.4 deep-scrub starts 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: 16.4 deep-scrub ok 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.25030 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: 16.6 deep-scrub starts 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: 16.6 deep-scrub ok 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: 16.1 deep-scrub starts 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: 16.1 deep-scrub ok 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: 16.9 deep-scrub starts 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: 16.9 deep-scrub ok 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.25231 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm05-92281-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-92281-2"}]': finished 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.24988 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]': finished 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.24985 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]': finished 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91659-6", "pg_num": 4}]': finished 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-4"}]': finished 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-91043-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: osdmap e71: 8 total, 8 up, 8 in 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-4", "tierpool": "test-rados-api-vm05-91276-4-cache"}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3210260850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-91018-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3060166736' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.25297 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-91018-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.25309 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[58955]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: 16.4 deep-scrub starts 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: 16.4 deep-scrub ok 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.25030 ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: 16.6 deep-scrub starts 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: 16.6 deep-scrub ok 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: 16.1 deep-scrub starts 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: 16.1 deep-scrub ok 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: 16.9 deep-scrub starts 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: 16.9 deep-scrub ok 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.25231 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm05-92281-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm05-92281-2"}]': finished 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.24988 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm05-91492-7"}]': finished 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.24985 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm05-91544-7"}]': finished 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91659-6", "pg_num": 4}]': finished 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-4"}]': finished 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm05-91043-23", "var": "allow_ec_overwrites", "val": "true"}]': finished 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: osdmap e71: 8 total, 8 up, 8 in 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-4", "tierpool": "test-rados-api-vm05-91276-4-cache"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3947919250' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/742678248' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3210260850' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-91018-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3060166736' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.24985 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.24988 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.25297 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-91018-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.25309 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:30.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:29 vm05 ceph-mon[51512]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:30.334 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:39:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:39:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:39:30.453 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: Running main() from gmock_main.cc 2026-03-10T13:39:30.453 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [==========] Running 9 tests from 2 test suites. 2026-03-10T13:39:30.453 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [----------] Global test environment set-up. 
2026-03-10T13:39:30.453 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [----------] 5 tests from LibRadosStat 2026-03-10T13:39:30.453 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStat.Stat 2026-03-10T13:39:30.453 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStat.Stat (369 ms) 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStat.Stat2 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStat.Stat2 (10 ms) 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStat.StatNS 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStat.StatNS (38 ms) 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStat.ClusterStat 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStat.ClusterStat (1 ms) 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStat.PoolStat 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStat.PoolStat (6 ms) 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [----------] 5 tests from LibRadosStat (424 ms total) 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [----------] 4 tests from LibRadosStatEC 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStatEC.Stat 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStatEC.Stat (1876 ms) 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStatEC.StatNS 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStatEC.StatNS (1779 ms) 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStatEC.ClusterStat 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStatEC.ClusterStat (0 ms) 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ RUN ] LibRadosStatEC.PoolStat 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ OK ] LibRadosStatEC.PoolStat (1 ms) 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [----------] 4 tests from LibRadosStatEC (3656 ms total) 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [----------] Global test environment tear-down 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [==========] 9 tests from 2 test suites ran. (11784 ms total) 2026-03-10T13:39:30.454 INFO:tasks.workunit.client.0.vm05.stdout: api_stat: [ PASSED ] 9 tests. 2026-03-10T13:39:30.458 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: Running main() from gmock_main.cc 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [==========] Running 9 tests from 2 test suites. 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [----------] Global test environment set-up. 
2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [----------] 5 tests from LibRadosStatPP 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: seed 91492 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.StatPP 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatPP.StatPP (206 ms) 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.Stat2Mtime2PP 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatPP.Stat2Mtime2PP (89 ms) 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.ClusterStatPP 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatPP.ClusterStatPP (1 ms) 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.PoolStatPP 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatPP.PoolStatPP (11 ms) 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.StatPPNS 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatPP.StatPPNS (31 ms) 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [----------] 5 tests from LibRadosStatPP (338 ms total) 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [----------] 4 tests from LibRadosStatECPP 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.StatPP 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.StatPP (2091 ms) 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.ClusterStatPP 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.ClusterStatPP (0 ms) 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.PoolStatPP 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.PoolStatPP (1080 ms) 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.StatPPNS 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.StatPPNS (489 ms) 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [----------] 4 tests from LibRadosStatECPP (3660 ms total) 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [----------] Global test environment tear-down 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [==========] 9 tests from 2 test suites ran. (11840 ms total) 2026-03-10T13:39:30.467 INFO:tasks.workunit.client.0.vm05.stdout: api_stat_pp: [ PASSED ] 9 tests. 2026-03-10T13:39:30.468 INFO:tasks.workunit.client.0.vm05.stdout: list: Running main() from gmock_main.cc 2026-03-10T13:39:30.468 INFO:tasks.workunit.client.0.vm05.stdout: list: [==========] Running 3 tests from 1 test suite. 
2026-03-10T13:39:30.468 INFO:tasks.workunit.client.0.vm05.stdout: list: [----------] Global test environment set-up. 2026-03-10T13:39:30.469 INFO:tasks.workunit.client.0.vm05.stdout: list: [----------] 3 tests from NeoradosList 2026-03-10T13:39:30.469 INFO:tasks.workunit.client.0.vm05.stdout: list: [ RUN ] NeoradosList.ListObjects 2026-03-10T13:39:30.469 INFO:tasks.workunit.client.0.vm05.stdout: list: [ OK ] NeoradosList.ListObjects (2852 ms) 2026-03-10T13:39:30.469 INFO:tasks.workunit.client.0.vm05.stdout: list: [ RUN ] NeoradosList.ListObjectsNS 2026-03-10T13:39:30.469 INFO:tasks.workunit.client.0.vm05.stdout: list: [ OK ] NeoradosList.ListObjectsNS (3702 ms) 2026-03-10T13:39:30.469 INFO:tasks.workunit.client.0.vm05.stdout: list: [ RUN ] NeoradosList.ListObjectsMany 2026-03-10T13:39:30.469 INFO:tasks.workunit.client.0.vm05.stdout: list: [ OK ] NeoradosList.ListObjectsMany (4673 ms) 2026-03-10T13:39:30.469 INFO:tasks.workunit.client.0.vm05.stdout: list: [----------] 3 tests from NeoradosList (11227 ms total) 2026-03-10T13:39:30.469 INFO:tasks.workunit.client.0.vm05.stdout: list: 2026-03-10T13:39:30.469 INFO:tasks.workunit.client.0.vm05.stdout: list: [----------] Global test environment tear-down 2026-03-10T13:39:30.469 INFO:tasks.workunit.client.0.vm05.stdout: list: [==========] 3 tests from 1 test suite ran. (11227 ms total) 2026-03-10T13:39:30.469 INFO:tasks.workunit.client.0.vm05.stdout: list: [ PASSED ] 3 tests. 2026-03-10T13:39:30.705 INFO:tasks.workunit.client.0.vm05.stdout: api_io: Running main() from gmock_main.cc 2026-03-10T13:39:30.705 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [==========] Running 24 tests from 2 test suites. 2026-03-10T13:39:30.705 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [----------] Global test environment set-up. 
2026-03-10T13:39:30.705 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [----------] 14 tests from LibRadosIo 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.SimpleWrite 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.SimpleWrite (299 ms) 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.TooBig 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.TooBig (0 ms) 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.ReadTimeout 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: no timeout :/ 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: no timeout :/ 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: no timeout :/ 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: no timeout :/ 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: no timeout :/ 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.ReadTimeout (48 ms) 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.RoundTrip 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.RoundTrip (21 ms) 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.Checksum 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.Checksum (4 ms) 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.OverlappingWriteRoundTrip 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.OverlappingWriteRoundTrip (13 ms) 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.WriteFullRoundTrip 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.WriteFullRoundTrip (8 ms) 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.AppendRoundTrip 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.AppendRoundTrip (7 ms) 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.ZeroLenZero 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.ZeroLenZero (2 ms) 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.TruncTest 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.TruncTest (8 ms) 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.RemoveTest 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.RemoveTest (6 ms) 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.XattrsRoundTrip 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.XattrsRoundTrip (3 ms) 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.RmXattr 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.RmXattr (14 ms) 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIo.XattrIter 2026-03-10T13:39:30.706 
INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIo.XattrIter (10 ms) 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [----------] 14 tests from LibRadosIo (443 ms total) 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [----------] 10 tests from LibRadosIoEC 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.SimpleWrite 2026-03-10T13:39:30.706 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.SimpleWrite (2112 ms) 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.RoundTrip 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.RoundTrip (1190 ms) 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.OverlappingWriteRoundTrip 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.OverlappingWriteRoundTrip (264 ms) 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.WriteFullRoundTrip 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.WriteFullRoundTrip (102 ms) 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.AppendRoundTrip 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.AppendRoundTrip (16 ms) 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.TruncTest 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.TruncTest (30 ms) 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.RemoveTest 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.RemoveTest (5 ms) 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.XattrsRoundTrip 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.XattrsRoundTrip (4 ms) 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.RmXattr 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.RmXattr (136 ms) 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ RUN ] LibRadosIoEC.XattrIter 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ OK ] LibRadosIoEC.XattrIter (284 ms) 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [----------] 10 tests from LibRadosIoEC (4143 ms total) 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [----------] Global test environment tear-down 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [==========] 24 tests from 2 test suites ran. (12349 ms total) 2026-03-10T13:39:30.707 INFO:tasks.workunit.client.0.vm05.stdout: api_io: [ PASSED ] 24 tests. 
2026-03-10T13:39:30.813 INFO:tasks.workunit.client.0.vm05.stdout:5-91051-3", "field": "max_bytes", "val": "4096"}]: dispatch
2026-03-10T13:39:30.813 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.803747+0000 mon.a [INF] from='client.25231 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm05-92281-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:39:30.813 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.818434+0000 mon.a [INF] from='client.25222 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:39:30.813 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:25.818539+0000 mon.a [INF] from='client.25225 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm05-91018-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:39:30.813 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:26.382689+0000 client.admin [INF] threexx
2026-03-10T13:39:30.813 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:26.382783+0000 mon.b [INF] from='client.? v1:192.168.123.105:0/3543413869' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch
2026-03-10T13:39:30.813 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: got: 2026-03-10T13:39:26.382935+0000 mon.a [INF] from='client.25030 ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch
2026-03-10T13:39:30.813 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ OK ] LibRadosCmd.WatchLog (8813 ms)
2026-03-10T13:39:30.813 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [----------] 4 tests from LibRadosCmd (12050 ms total)
2026-03-10T13:39:30.813 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd:
2026-03-10T13:39:30.813 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [----------] Global test environment tear-down
2026-03-10T13:39:30.813 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [==========] 4 tests from 1 test suite ran. (12055 ms total)
2026-03-10T13:39:30.813 INFO:tasks.workunit.client.0.vm05.stdout: api_cmd: [ PASSED ] 4 tests.
2026-03-10T13:39:31.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: 16.2 deep-scrub starts 2026-03-10T13:39:31.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: 16.2 deep-scrub ok 2026-03-10T13:39:31.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: pgmap v55: 804 pgs: 1 active+clean+snaptrim, 13 creating+activating, 163 creating+peering, 627 active+clean; 121 MiB data, 645 MiB used, 159 GiB / 160 GiB avail; 9.9 MiB/s rd, 25 MiB/s wr, 199 op/s 2026-03-10T13:39:31.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: 16.3 deep-scrub starts 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: 16.3 deep-scrub ok 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: 16.8 deep-scrub starts 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: 16.8 deep-scrub ok 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: 16.7 deep-scrub starts 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: 16.7 deep-scrub ok 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.25285 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm05-92320-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm05-92320-2"}]': finished 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-4", "tierpool": "test-rados-api-vm05-91276-4-cache"}]': finished 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.24985 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]': finished 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.24988 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]': finished 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.25297 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-91018-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.25309 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.24982 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]': finished 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: osdmap e72: 8 total, 8 up, 8 in 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "overlaypool": "test-rados-api-vm05-91659-6"}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "overlaypool": "test-rados-api-vm05-91659-6"}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1283246557' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.25330 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: Health check update: 12 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: pool 'PoolQuotaPP_vm05-91051-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "overlaypool": "test-rados-api-vm05-91659-6"}]': finished 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.24982 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]': finished 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.25330 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91659-6", "mode": "writeback"}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: osdmap e73: 8 total, 8 up, 8 in 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2856084969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91659-6", "mode": "writeback"}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3042351664' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm05-91333-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:30 vm09 ceph-mon[53367]: from='client.25231 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: 16.2 deep-scrub starts 2026-03-10T13:39:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: 16.2 deep-scrub ok 2026-03-10T13:39:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: pgmap v55: 804 pgs: 1 active+clean+snaptrim, 13 creating+activating, 163 creating+peering, 627 active+clean; 121 MiB data, 645 MiB used, 159 GiB / 160 GiB avail; 9.9 MiB/s rd, 25 MiB/s wr, 199 op/s 2026-03-10T13:39:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: 16.3 deep-scrub starts 2026-03-10T13:39:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: 16.3 deep-scrub ok 2026-03-10T13:39:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: 16.8 deep-scrub starts 2026-03-10T13:39:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: 16.8 deep-scrub ok 2026-03-10T13:39:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: 16.7 deep-scrub starts 2026-03-10T13:39:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: 16.7 deep-scrub ok 2026-03-10T13:39:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:39:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.25285 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm05-92320-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm05-92320-2"}]': finished 2026-03-10T13:39:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-4", "tierpool": "test-rados-api-vm05-91276-4-cache"}]': finished 2026-03-10T13:39:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:39:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.24985 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.24988 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.25297 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-91018-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.25309 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.24982 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: osdmap e72: 8 total, 8 up, 8 in 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "overlaypool": "test-rados-api-vm05-91659-6"}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "overlaypool": "test-rados-api-vm05-91659-6"}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1283246557' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.25330 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: Health check update: 12 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: pool 'PoolQuotaPP_vm05-91051-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "overlaypool": "test-rados-api-vm05-91659-6"}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.24982 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.25330 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91659-6", "mode": "writeback"}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: osdmap e73: 8 total, 8 up, 8 in 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2856084969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91659-6", "mode": "writeback"}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3042351664' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm05-91333-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[58955]: from='client.25231 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: 16.2 deep-scrub starts 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: 16.2 deep-scrub ok 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: pgmap v55: 804 pgs: 1 active+clean+snaptrim, 13 creating+activating, 163 creating+peering, 627 active+clean; 121 MiB data, 645 MiB used, 159 GiB / 160 GiB avail; 9.9 MiB/s rd, 25 MiB/s wr, 199 op/s 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: 16.3 deep-scrub starts 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: 16.3 deep-scrub ok 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: 16.8 deep-scrub starts 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: 16.8 deep-scrub ok 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: 16.7 deep-scrub starts 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: 16.7 deep-scrub ok 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.25285 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm05-92320-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm05-92320-2"}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/935159207' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-4", "tierpool": "test-rados-api-vm05-91276-4-cache"}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.24985 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm05-91544-7"}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.24988 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm05-91492-7"}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.25297 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm05-91018-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.25309 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.24982 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm05-91079-16"}]': finished 2026-03-10T13:39:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: osdmap e72: 8 total, 8 up, 8 in 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "overlaypool": "test-rados-api-vm05-91659-6"}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "overlaypool": "test-rados-api-vm05-91659-6"}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3895429635' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1283246557' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.24982 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.25330 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: Health check update: 12 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: pool 'PoolQuotaPP_vm05-91051-3' is full (reached quota's max_bytes: 4 KiB) 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "overlaypool": "test-rados-api-vm05-91659-6"}]': finished 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.24982 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm05-91079-16"}]': finished 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.25330 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/731411722' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91659-6", "mode": "writeback"}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: osdmap e73: 8 total, 8 up, 8 in 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.24961 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2856084969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91659-6", "mode": "writeback"}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3042351664' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm05-91333-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:31.333 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:30 vm05 ceph-mon[51512]: from='client.25231 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: 16.5 deep-scrub starts 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: 16.5 deep-scrub ok 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91659-6", "mode": "writeback"}]': finished 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3042351664' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm05-91333-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.25153 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-91182-10"}]': finished 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.25231 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-92281-2"}]': finished 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2716639568' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-91018-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/746879117' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2856084969' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: osdmap e74: 8 total, 8 up, 8 in 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.25357 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-91018-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.25366 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.25231 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3978117112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "app2"}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.25285 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:32.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: 16.5 deep-scrub starts 2026-03-10T13:39:32.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: 16.5 deep-scrub ok 2026-03-10T13:39:32.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:32.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-10T13:39:32.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:39:32.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:32.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91659-6", "mode": "writeback"}]': finished 2026-03-10T13:39:32.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3042351664' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm05-91333-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:32.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.25153 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-91182-10"}]': finished 2026-03-10T13:39:32.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.25231 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-92281-2"}]': finished 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2716639568' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-91018-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/746879117' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2856084969' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: osdmap e74: 8 total, 8 up, 8 in 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.25357 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-91018-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.25366 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.25231 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3978117112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "app2"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.25285 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: 16.5 deep-scrub starts 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: 16.5 deep-scrub ok 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.24961 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91659-6", "mode": "writeback"}]': finished 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3042351664' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm05-91333-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.25153 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm05-91182-10"}]': finished 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.25231 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm05-92281-2"}]': finished 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "app1","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2716639568' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-91018-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/746879117' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4280206381' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2856084969' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: osdmap e74: 8 total, 8 up, 8 in 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.25357 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-91018-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.25366 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.25153 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-91182-10"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.25231 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-92281-2"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3978117112' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "app2"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.25285 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:32.729 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: Running main() from gmock_main.cc 2026-03-10T13:39:32.729 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [==========] Running 2 tests from 1 test suite. 2026-03-10T13:39:32.729 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [----------] Global test environment set-up. 2026-03-10T13:39:32.729 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [----------] 2 tests from NeoRadosECIo 2026-03-10T13:39:32.729 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [ RUN ] NeoRadosECIo.SimpleWrite 2026-03-10T13:39:32.729 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [ OK ] NeoRadosECIo.SimpleWrite (6414 ms) 2026-03-10T13:39:32.729 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [ RUN ] NeoRadosECIo.ReadOp 2026-03-10T13:39:32.729 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [ OK ] NeoRadosECIo.ReadOp (7037 ms) 2026-03-10T13:39:32.729 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [----------] 2 tests from NeoRadosECIo (13451 ms total) 2026-03-10T13:39:32.729 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: 2026-03-10T13:39:32.729 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [----------] Global test environment tear-down 2026-03-10T13:39:32.729 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [==========] 2 tests from 1 test suite ran. (13452 ms total) 2026-03-10T13:39:32.729 INFO:tasks.workunit.client.0.vm05.stdout: ec_io: [ PASSED ] 2 tests. 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: Running main() from gmock_main.cc 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [==========] Running 16 tests from 2 test suites. 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [----------] Global test environment set-up. 
2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [----------] 8 tests from LibRadosLock 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.LockExclusive 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.LockExclusive (358 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.LockShared 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.LockShared (24 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.LockExclusiveDur 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.LockExclusiveDur (1095 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.LockSharedDur 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.LockSharedDur (1011 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.LockMayRenew 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.LockMayRenew (4 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.Unlock 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.Unlock (84 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.ListLockers 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.ListLockers (5 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLock.BreakLock 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLock.BreakLock (13 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [----------] 8 tests from LibRadosLock (2594 ms total) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [----------] 8 tests from LibRadosLockEC 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.LockExclusive 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.LockExclusive (890 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.LockShared 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.LockShared (106 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.LockExclusiveDur 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.LockExclusiveDur (1071 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.LockSharedDur 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.LockSharedDur (1062 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.LockMayRenew 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.LockMayRenew (5 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.Unlock 2026-03-10T13:39:32.753 
INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.Unlock (4 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.ListLockers 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.ListLockers (6 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ RUN ] LibRadosLockEC.BreakLock 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ OK ] LibRadosLockEC.BreakLock (3 ms) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [----------] 8 tests from LibRadosLockEC (3147 ms total) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [----------] Global test environment tear-down 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [==========] 16 tests from 2 test suites ran. (14307 ms total) 2026-03-10T13:39:32.753 INFO:tasks.workunit.client.0.vm05.stdout: api_lock: [ PASSED ] 16 tests. 2026-03-10T13:39:33.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: pgmap v59: 760 pgs: 236 unknown, 3 creating+activating, 37 creating+peering, 484 active+clean; 120 MiB data, 645 MiB used, 159 GiB / 160 GiB avail; 13 MiB/s rd, 34 MiB/s wr, 251 op/s 2026-03-10T13:39:33.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: 16.0 deep-scrub starts 2026-03-10T13:39:33.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: 16.0 deep-scrub ok 2026-03-10T13:39:33.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:33.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.25357 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-91018-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:33.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.25366 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:33.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.25153 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-91182-10"}]': finished 2026-03-10T13:39:33.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.25231 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-92281-2"}]': finished 2026-03-10T13:39:33.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.25150 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-91213-10"}]': finished 2026-03-10T13:39:33.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.25285 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm05-92320-2"}]': finished 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.25384 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: osdmap e75: 8 total, 8 up, 8 in 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-91299-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3842817953' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.25390 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3978117112' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.25285 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[58955]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: pgmap v59: 760 pgs: 236 unknown, 3 creating+activating, 37 creating+peering, 484 active+clean; 120 MiB data, 645 MiB used, 159 GiB / 160 GiB avail; 13 MiB/s rd, 34 MiB/s wr, 251 op/s 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: 16.0 deep-scrub starts 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: 16.0 deep-scrub ok 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.25357 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-91018-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.25366 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.25153 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-91182-10"}]': finished 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.25231 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-92281-2"}]': finished 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.25150 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-91213-10"}]': finished 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.25285 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm05-92320-2"}]': finished 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.25384 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: osdmap e75: 8 total, 8 up, 8 in 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-91299-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3842817953' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.25390 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3978117112' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.25285 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:33.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:32 vm05 ceph-mon[51512]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: pgmap v59: 760 pgs: 236 unknown, 3 creating+activating, 37 creating+peering, 484 active+clean; 120 MiB data, 645 MiB used, 159 GiB / 160 GiB avail; 13 MiB/s rd, 34 MiB/s wr, 251 op/s 2026-03-10T13:39:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: 16.0 deep-scrub starts 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: 16.0 deep-scrub ok 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.25357 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm05-91018-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.25366 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.25153 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm05-91182-10"}]': finished 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.25231 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm05-92281-2"}]': finished 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.25150 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm05-91213-10"}]': finished 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.25285 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm05-92320-2"}]': finished 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm05-91299-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.25384 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm05-91043-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: osdmap e75: 8 total, 8 up, 8 in 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-91299-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3842817953' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.25390 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/122086402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3978117112' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.25150 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-91213-10"}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.25285 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm05-92320-2"}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:32 vm09 ceph-mon[53367]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:33.777 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: Running main() from gmock_main.cc 2026-03-10T13:39:33.777 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [==========] Running 16 tests from 2 test suites. 2026-03-10T13:39:33.777 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [----------] Global test environment set-up. 
2026-03-10T13:39:33.777 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockPP 2026-03-10T13:39:33.777 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: seed 91213 2026-03-10T13:39:33.777 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockExclusivePP 2026-03-10T13:39:33.779 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockExclusivePP (355 ms) 2026-03-10T13:39:33.779 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockSharedPP 2026-03-10T13:39:33.779 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockSharedPP (31 ms) 2026-03-10T13:39:33.779 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockExclusiveDurPP 2026-03-10T13:39:33.779 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockExclusiveDurPP (1140 ms) 2026-03-10T13:39:33.779 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockSharedDurPP 2026-03-10T13:39:33.779 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockSharedDurPP (1038 ms) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockMayRenewPP 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockMayRenewPP (15 ms) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.UnlockPP 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.UnlockPP (6 ms) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.ListLockersPP 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.ListLockersPP (5 ms) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.BreakLockPP 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockPP.BreakLockPP (5 ms) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockPP (2595 ms total) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockECPP 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockExclusivePP 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockExclusivePP (994 ms) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockSharedPP 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockSharedPP (41 ms) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockExclusiveDurPP 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockExclusiveDurPP (1034 ms) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockSharedDurPP 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockSharedDurPP (1307 ms) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] 
LibRadosLockECPP.LockMayRenewPP 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockMayRenewPP (33 ms) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.UnlockPP 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.UnlockPP (8 ms) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.ListLockersPP 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.ListLockersPP (7 ms) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.BreakLockPP 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.BreakLockPP (4 ms) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockECPP (3428 ms total) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [----------] Global test environment tear-down 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [==========] 16 tests from 2 test suites ran. (15335 ms total) 2026-03-10T13:39:33.780 INFO:tasks.workunit.client.0.vm05.stdout: api_lock_pp: [ PASSED ] 16 tests. 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: Running main() from gmock_main.cc 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [==========] Running 6 tests from 1 test suite. 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [----------] Global test environment set-up. 
2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [----------] 6 tests from NeoRadosPools 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ RUN ] NeoRadosPools.PoolList 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ OK ] NeoRadosPools.PoolList (1660 ms) 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ RUN ] NeoRadosPools.PoolLookup 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ OK ] NeoRadosPools.PoolLookup (2068 ms) 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ RUN ] NeoRadosPools.PoolLookupOtherInstance 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ OK ] NeoRadosPools.PoolLookupOtherInstance (2712 ms) 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ RUN ] NeoRadosPools.PoolDelete 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ OK ] NeoRadosPools.PoolDelete (4698 ms) 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ RUN ] NeoRadosPools.PoolCreateDelete 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ OK ] NeoRadosPools.PoolCreateDelete (1235 ms) 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ RUN ] NeoRadosPools.PoolCreateWithCrushRule 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ OK ] NeoRadosPools.PoolCreateWithCrushRule (2034 ms) 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [----------] 6 tests from NeoRadosPools (14407 ms total) 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [----------] Global test environment tear-down 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [==========] 6 tests from 1 test suite ran. (14407 ms total) 2026-03-10T13:39:33.783 INFO:tasks.workunit.client.0.vm05.stdout: pool: [ PASSED ] 6 tests. 2026-03-10T13:39:34.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: pgmap v62: 744 pgs: 324 unknown, 2 creating+activating, 30 creating+peering, 388 active+clean; 120 MiB data, 645 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: from='client.25390 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: from='client.25150 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-91213-10"}]': finished 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: from='client.25285 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm05-92320-2"}]': finished 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: from='client.25277 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: osdmap e76: 8 total, 8 up, 8 in 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-91536-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-91536-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/653898259' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-92320-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: from='client.25423 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-92320-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: pgmap v62: 744 pgs: 324 unknown, 2 creating+activating, 30 creating+peering, 388 active+clean; 120 MiB data, 645 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: from='client.25390 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: from='client.25150 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-91213-10"}]': finished 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: from='client.25285 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm05-92320-2"}]': finished 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: from='client.25277 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: osdmap e76: 8 total, 8 up, 8 in 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-91536-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-91536-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/653898259' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-92320-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: from='client.25423 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-92320-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:34.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: pgmap v62: 744 pgs: 324 unknown, 2 creating+activating, 30 creating+peering, 388 active+clean; 120 MiB data, 645 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:39:34.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:39:34.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T13:39:34.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: from='client.25390 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:34.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: from='client.25150 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm05-91213-10"}]': finished 2026-03-10T13:39:34.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: from='client.25285 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm05-92320-2"}]': finished 2026-03-10T13:39:34.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: from='client.25277 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:34.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: osdmap e76: 8 total, 8 up, 8 in 2026-03-10T13:39:34.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-91536-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:34.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:34.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-91536-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:34.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T13:39:34.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/653898259' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-92320-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:34.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: from='client.25423 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-92320-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:34.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:34.925 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: Running main() from gmock_main.cc 2026-03-10T13:39:34.925 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [==========] Running 16 tests from 2 test suites. 2026-03-10T13:39:34.925 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [----------] Global test environment set-up. 
2026-03-10T13:39:34.925 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [----------] 2 tests from LibRadosWatchNotifyECPP 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyECPP.WatchNotify 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: notify 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyECPP.WatchNotify (1388 ms) 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyECPP.WatchNotifyTimeout 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyECPP.WatchNotifyTimeout (13 ms) 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [----------] 2 tests from LibRadosWatchNotifyECPP (1401 ms total) 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [----------] 14 tests from LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/0 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: notify 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/0 (277 ms) 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/1 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: notify 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/1 (3345 ms) 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/0 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/0 (5 ms) 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/1 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/1 (5 ms) 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/0 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 93996095927008 notify_id 317827579904 notifier_gid 25237 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/0 (4 ms) 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/1 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 93996095927008 notify_id 317827579905 notifier_gid 25237 2026-03-10T13:39:34.926 
INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/1 (5 ms) 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/0 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 93996095927008 notify_id 317827579904 notifier_gid 25237 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/0 (5 ms) 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/1 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 93996095927008 notify_id 317827579906 notifier_gid 25237 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/1 (5 ms) 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/0 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 93996095927008 notify_id 317827579907 notifier_gid 25237 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/0 (4 ms) 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/1 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 93996095927008 notify_id 317827579908 notifier_gid 25237 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/1 (6 ms) 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/0 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: trying... 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 93996095927008 notify_id 317827579909 notifier_gid 25237 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: timed out 2026-03-10T13:39:34.926 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: flushing 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='client.25384 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='client.25423 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-92320-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/653898259' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-92320-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: osdmap e77: 8 total, 8 up, 8 in 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='client.25423 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-92320-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2867344954' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3445210994' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-91018-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1656834100' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-91476-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='client.25286 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='client.25301 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-91018-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='client.25304 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-91476-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:35.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:35 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='client.25384 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='client.25423 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-92320-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/653898259' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-92320-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: osdmap e77: 8 total, 8 up, 8 in 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='client.25423 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-92320-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2867344954' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3445210994' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-91018-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1656834100' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-91476-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='client.25286 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='client.25301 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-91018-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='client.25304 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-91476-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='client.25384 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm05-91043-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='client.25423 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm05-92320-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/653898259' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-92320-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: osdmap e77: 8 total, 8 up, 8 in 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='client.25423 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-92320-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2867344954' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3445210994' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-91018-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1656834100' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-91476-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='client.25286 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='client.25301 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-91018-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='client.25304 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-91476-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:36.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:36.084 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:36.084 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:36.084 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: Health check update: 9 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:36.084 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:36.084 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:35 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:36.851 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:36 vm09 ceph-mon[53367]: pgmap v65: 656 pgs: 200 unknown, 456 active+clean; 144 MiB data, 855 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 6.7 MiB/s wr, 3 op/s 2026-03-10T13:39:36.851 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:36 vm09 ceph-mon[53367]: from='client.25277 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-91536-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]': finished 2026-03-10T13:39:36.851 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key1"}]': finished 2026-03-10T13:39:36.851 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:36 vm09 ceph-mon[53367]: from='client.25286 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:36.851 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:36 vm09 ceph-mon[53367]: from='client.25301 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-91018-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:37.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:36 vm09 ceph-mon[53367]: from='client.25304 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-91476-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:37.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:36 vm09 ceph-mon[53367]: osdmap e78: 8 total, 8 up, 8 in 2026-03-10T13:39:37.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4104850454' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:37.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:36 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:37.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:36 vm09 ceph-mon[53367]: from='client.25310 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:37.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:37.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:36 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:39:37.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:36 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:39:37.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:36 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:37.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:36 vm09 ceph-mon[53367]: Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[51512]: pgmap v65: 656 pgs: 200 unknown, 456 active+clean; 144 MiB data, 855 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 6.7 MiB/s wr, 3 op/s 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[51512]: from='client.25277 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-91536-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]': finished 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key1"}]': finished 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[51512]: from='client.25286 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[51512]: from='client.25301 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-91018-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[51512]: from='client.25304 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-91476-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[51512]: osdmap e78: 8 total, 8 up, 8 in 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/4104850454' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[51512]: from='client.25310 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[51512]: Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[58955]: pgmap v65: 656 pgs: 200 unknown, 456 active+clean; 144 MiB data, 855 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 6.7 MiB/s wr, 3 op/s 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[58955]: from='client.25277 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm05-91536-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]': finished 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/27273241' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm05-91299-1","app":"app1","key":"key1"}]': finished 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[58955]: from='client.25286 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[58955]: from='client.25301 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm05-91018-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[58955]: from='client.25304 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm05-91476-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[58955]: osdmap e78: 8 total, 8 up, 8 in 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4104850454' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[58955]: from='client.25310 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:39:37.336 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:39:37.337 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:37.337 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:36 vm05 ceph-mon[58955]: Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) 2026-03-10T13:39:38.333 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[58955]: pgmap v67: 792 pgs: 336 unknown, 456 active+clean; 144 MiB data, 855 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 7.9 MiB/s wr, 9 op/s 2026-03-10T13:39:38.333 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[58955]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T13:39:38.333 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[58955]: from='client.25423 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-92320-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-92320-3"}]': finished 2026-03-10T13:39:38.333 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:38.333 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[58955]: from='client.25310 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:38.333 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[58955]: osdmap e79: 8 total, 8 up, 8 in 2026-03-10T13:39:38.333 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:38.333 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:38.333 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[58955]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:38.333 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:38.333 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:39:38.333 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:39:38.334 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[51512]: pgmap v67: 792 pgs: 336 unknown, 456 active+clean; 144 MiB data, 855 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 7.9 MiB/s wr, 9 op/s 2026-03-10T13:39:38.334 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[51512]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T13:39:38.334 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[51512]: from='client.25423 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-92320-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-92320-3"}]': finished 2026-03-10T13:39:38.334 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:38.334 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[51512]: from='client.25310 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:38.334 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[51512]: osdmap e79: 8 total, 8 up, 8 in 2026-03-10T13:39:38.334 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:38.334 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:38.334 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[51512]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:38.334 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:38.334 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:39:38.334 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:39:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:37 vm09 ceph-mon[53367]: pgmap v67: 792 pgs: 336 unknown, 456 active+clean; 144 MiB data, 855 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 7.9 MiB/s wr, 9 op/s 2026-03-10T13:39:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:37 vm09 ceph-mon[53367]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T13:39:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:37 vm09 ceph-mon[53367]: from='client.25423 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm05-92320-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm05-92320-3"}]': finished 2026-03-10T13:39:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:37 vm09 ceph-mon[53367]: from='client.25310 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:37 vm09 ceph-mon[53367]: osdmap e79: 8 total, 8 up, 8 in 2026-03-10T13:39:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:37 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:37 vm09 ceph-mon[53367]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:37 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:37 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:39:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-7", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:39:38.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:39:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:39:39.058 INFO:tasks.workunit.client.0.vm05.stdout: api_wat ] LibRadosIoECPP.RoundTripPP2 2026-03-10T13:39:39.058 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RoundTripPP2 (4 ms) 2026-03-10T13:39:39.058 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.OverlappingWriteRoundTripPP 2026-03-10T13:39:39.058 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.OverlappingWriteRoundTripPP (18 ms) 2026-03-10T13:39:39.058 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.WriteFullRoundTripPP 2026-03-10T13:39:39.058 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.WriteFullRoundTripPP (18 ms) 2026-03-10T13:39:39.058 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.WriteFullRoundTripPP2 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.WriteFullRoundTripPP2 (5 ms) 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.AppendRoundTripPP 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.AppendRoundTripPP (9 ms) 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.TruncTestPP 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.TruncTestPP (14 ms) 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RemoveTestPP 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RemoveTestPP (145 ms) 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.XattrsRoundTripPP 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.XattrsRoundTripPP (243 ms) 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RmXattrPP 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RmXattrPP (90 ms) 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: 
api_io_pp: [ RUN ] LibRadosIoECPP.CrcZeroWrite 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CrcZeroWrite (6247 ms) 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.XattrListPP 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.XattrListPP (1317 ms) 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtPP 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtPP (56 ms) 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtDNEPP 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtDNEPP (3 ms) 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtMismatchPP 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtMismatchPP (7 ms) 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [----------] 18 tests from LibRadosIoECPP (11843 ms total) 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [----------] Global test environment tear-down 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [==========] 39 tests from 2 test suites ran. (20729 ms total) 2026-03-10T13:39:39.059 INFO:tasks.workunit.client.0.vm05.stdout: api_io_pp: [ PASSED ] 39 tests. 2026-03-10T13:39:39.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:39.428 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:39 vm09 ceph-mon[53367]: from='client.25384 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:39.428 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:39:39.428 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:39 vm09 ceph-mon[53367]: osdmap e80: 8 total, 8 up, 8 in 2026-03-10T13:39:39.428 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:39.428 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:39 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-7"}]: dispatch 2026-03-10T13:39:39.429 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:39 vm09 ceph-mon[53367]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:39.429 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:39 vm09 ceph-mon[53367]: pgmap v70: 592 pgs: 200 unknown, 392 active+clean; 144 MiB data, 855 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:39:39.429 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:39.429 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:39 vm09 ceph-mon[53367]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:39.429 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[51512]: from='client.25384 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:39:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[51512]: osdmap e80: 8 total, 8 up, 8 in 2026-03-10T13:39:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-7"}]: dispatch 2026-03-10T13:39:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[51512]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[51512]: pgmap v70: 592 pgs: 200 unknown, 392 active+clean; 144 MiB data, 855 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:39:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[51512]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[58955]: from='client.25384 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:39:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[58955]: osdmap e80: 8 total, 8 up, 8 in 2026-03-10T13:39:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/505860445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-7"}]: dispatch 2026-03-10T13:39:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[58955]: from='client.25384 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]: dispatch 2026-03-10T13:39:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[58955]: pgmap v70: 592 pgs: 200 unknown, 392 active+clean; 144 MiB data, 855 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:39:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[58955]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:40.162 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: Running main() from gmock_main.cc 2026-03-10T13:39:40.162 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [==========] Running 11 tests from 2 test suites. 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [----------] Global test environment set-up. 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [----------] 10 tests from LibRadosWatchNotify 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify_test_cb 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify (360 ms) 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.Watch2Delete 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: waiting up to 300 for disconnect notification ... 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94492448783344 err -107 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.Watch2Delete (1050 ms) 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchDelete 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: waiting up to 300 for disconnect notification ... 
2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94492448783344 err -107 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchDelete (1007 ms) 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24745 notify_id 279172874240 cookie 94492449507536 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2 (7 ms) 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchNotify2 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24745 notify_id 279172874240 cookie 94492449554896 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchNotify2 (6 ms) 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioNotify 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24745 notify_id 279172874241 cookie 94492449554896 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioNotify (6 ms) 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2Multi 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24745 notify_id 279172874241 cookie 94492449554896 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24745 notify_id 279172874241 cookie 94492449569792 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2Multi (14 ms) 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2Timeout 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24745 notify_id 279172874242 cookie 94492449569792 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24745 notify_id 283467841539 cookie 94492449569792 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2Timeout (3211 ms) 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.Watch3Timeout 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: waiting up to 1024 for osd to time us out ... 
2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94492449579952 err -107 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_cb from 24745 notify_id 309237645315 cookie 94492449579952 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.Watch3Timeout (5220 ms) 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchDelete2 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: waiting up to 30 for disconnect notification ... 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94492449579952 err -107 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchDelete2 (1038 ms) 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [----------] 10 tests from LibRadosWatchNotify (11919 ms total) 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [----------] 1 test from LibRadosWatchNotifyEC 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotifyEC.WatchNotify 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: watch_notify_test_cb 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ OK ] LibRadosWatchNotifyEC.WatchNotify (1551 ms) 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [----------] 1 test from LibRadosWatchNotifyEC (1551 ms total) 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [----------] Global test environment tear-down 2026-03-10T13:39:40.163 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [==========] 11 tests from 2 test suites ran. (21502 ms total) 2026-03-10T13:39:40.164 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify: [ PASSED ] 11 tests. 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-7"}]': finished 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.25384 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.25277 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]': finished 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: osdmap e81: 8 total, 8 up, 8 in 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1012504993' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-91018-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.26692 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-91018-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1651574278' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-91051-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.26692 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-91018-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1651574278' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-91051-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.25277 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-91536-12"}]': finished 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: osdmap e82: 8 total, 8 up, 8 in 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-7"}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1525638428' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[51512]: from='client.27211 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:39:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:39:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-7"}]': finished 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.25384 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.25277 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]': finished 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: osdmap e81: 8 total, 8 up, 8 in 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1012504993' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-91018-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.26692 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-91018-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1651574278' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-91051-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.26692 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-91018-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1651574278' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-91051-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.25277 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-91536-12"}]': finished 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: osdmap e82: 8 total, 8 up, 8 in 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-7"}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1525638428' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:40 vm05 ceph-mon[58955]: from='client.27211 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-7"}]': finished 2026-03-10T13:39:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.25384 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm05-91043-23"}]': finished 2026-03-10T13:39:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.25277 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm05-91536-12"}]': finished 2026-03-10T13:39:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: osdmap e81: 8 total, 8 up, 8 in 2026-03-10T13:39:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1012504993' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-91018-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.26692 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-91018-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1651574278' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-91051-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/321496864' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.25277 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-91536-12"}]: dispatch 2026-03-10T13:39:40.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:39:40.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:40.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.26692 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm05-91018-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:40.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1651574278' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-91051-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:40.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.25277 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm05-91536-12"}]': finished 2026-03-10T13:39:40.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:39:40.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: osdmap e82: 8 total, 8 up, 8 in 2026-03-10T13:39:40.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-7"}]: dispatch 2026-03-10T13:39:40.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1525638428' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:40.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:40 vm09 ceph-mon[53367]: from='client.27211 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:41.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:41 vm05 ceph-mon[51512]: pgmap v72: 592 pgs: 32 creating+peering, 1 active, 4 creating+activating, 64 unknown, 491 active+clean; 144 MiB data, 873 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T13:39:41.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1313996459' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:41.584 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:41 vm05 ceph-mon[51512]: Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:41.585 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:41 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:41.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:41 vm05 ceph-mon[58955]: pgmap v72: 592 pgs: 32 creating+peering, 1 active, 4 creating+activating, 64 unknown, 491 active+clean; 144 MiB data, 873 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T13:39:41.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1313996459' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:41.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:41 vm05 ceph-mon[58955]: Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:41.585 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:41.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:41 vm09 ceph-mon[53367]: pgmap v72: 592 pgs: 32 creating+peering, 1 active, 4 creating+activating, 64 unknown, 491 active+clean; 144 MiB data, 873 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T13:39:41.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1313996459' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:41.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:41 vm09 ceph-mon[53367]: Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:41.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-7"}]': finished 2026-03-10T13:39:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:42 vm09 ceph-mon[53367]: from='client.27211 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1313996459' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:42 vm09 ceph-mon[53367]: osdmap e83: 8 total, 8 up, 8 in 2026-03-10T13:39:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:42 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:42 vm09 ceph-mon[53367]: osdmap e84: 8 total, 8 up, 8 in 2026-03-10T13:39:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3507624457' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-91051-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:42 vm09 ceph-mon[53367]: from='client.28447 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-91051-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-7"}]': finished 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[51512]: from='client.27211 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1313996459' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[51512]: osdmap e83: 8 total, 8 up, 8 in 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[51512]: osdmap e84: 8 total, 8 up, 8 in 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3507624457' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-91051-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[51512]: from='client.28447 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-91051-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-7"}]': finished 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[58955]: from='client.27211 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91863-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1313996459' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[58955]: osdmap e83: 8 total, 8 up, 8 in 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[58955]: osdmap e84: 8 total, 8 up, 8 in 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3507624457' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-91051-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:42 vm05 ceph-mon[58955]: from='client.28447 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-91051-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:43 vm09 ceph-mon[53367]: pgmap v75: 688 pgs: 1 active, 4 creating+activating, 192 unknown, 491 active+clean; 144 MiB data, 873 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:39:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/653898259' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:43 vm09 ceph-mon[53367]: from='client.25423 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:43 vm09 ceph-mon[53367]: from='client.28447 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-91051-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:43 vm09 ceph-mon[53367]: from='client.25423 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-92320-3"}]': finished 2026-03-10T13:39:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:43 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/653898259' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:43 vm09 ceph-mon[53367]: osdmap e85: 8 total, 8 up, 8 in 2026-03-10T13:39:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:43 vm09 ceph-mon[53367]: from='client.25423 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/432609795' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm05-91018-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:43 vm09 ceph-mon[53367]: from='client.28654 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm05-91018-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[51512]: pgmap v75: 688 pgs: 1 active, 4 creating+activating, 192 unknown, 491 active+clean; 144 MiB data, 873 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/653898259' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[51512]: from='client.25423 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[51512]: from='client.28447 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-91051-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[51512]: from='client.25423 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-92320-3"}]': finished 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/653898259' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[51512]: osdmap e85: 8 total, 8 up, 8 in 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[51512]: from='client.25423 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/432609795' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm05-91018-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[51512]: from='client.28654 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm05-91018-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[58955]: pgmap v75: 688 pgs: 1 active, 4 creating+activating, 192 unknown, 491 active+clean; 144 MiB data, 873 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/653898259' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[58955]: from='client.25423 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[58955]: from='client.28447 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm05-91051-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[58955]: from='client.25423 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm05-92320-3"}]': finished 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/653898259' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[58955]: osdmap e85: 8 total, 8 up, 8 in 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[58955]: from='client.25423 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-92320-3"}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/432609795' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm05-91018-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:43.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:43 vm05 ceph-mon[58955]: from='client.28654 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm05-91018-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:44.271 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: Running main() from gmock_main.cc 2026-03-10T13:39:44.271 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [==========] Running 3 tests from 1 test suite. 2026-03-10T13:39:44.271 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [----------] Global test environment set-up. 2026-03-10T13:39:44.271 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [----------] 3 tests from NeoradosECList 2026-03-10T13:39:44.271 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [ RUN ] NeoradosECList.ListObjects 2026-03-10T13:39:44.271 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [ OK ] NeoradosECList.ListObjects (7895 ms) 2026-03-10T13:39:44.271 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [ RUN ] NeoradosECList.ListObjectsNS 2026-03-10T13:39:44.271 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [ OK ] NeoradosECList.ListObjectsNS (6546 ms) 2026-03-10T13:39:44.272 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [ RUN ] NeoradosECList.ListObjectsMany 2026-03-10T13:39:44.272 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [ OK ] NeoradosECList.ListObjectsMany (10525 ms) 2026-03-10T13:39:44.272 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [----------] 3 tests from NeoradosECList (24966 ms total) 2026-03-10T13:39:44.272 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: 2026-03-10T13:39:44.272 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [----------] Global test environment tear-down 2026-03-10T13:39:44.272 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [==========] 3 tests from 1 test suite ran. (24966 ms total) 2026-03-10T13:39:44.272 INFO:tasks.workunit.client.0.vm05.stdout: ec_list: [ PASSED ] 3 tests. 2026-03-10T13:39:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:44 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:44 vm05 ceph-mon[51512]: from='client.25423 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-92320-3"}]': finished 2026-03-10T13:39:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:44 vm05 ceph-mon[51512]: from='client.28654 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm05-91018-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:44 vm05 ceph-mon[51512]: osdmap e86: 8 total, 8 up, 8 in 2026-03-10T13:39:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:44 vm05 ceph-mon[58955]: from='client.25423 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-92320-3"}]': finished 2026-03-10T13:39:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:44 vm05 ceph-mon[58955]: from='client.28654 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm05-91018-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:44 vm05 ceph-mon[58955]: osdmap e86: 8 total, 8 up, 8 in 2026-03-10T13:39:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:44 vm09 ceph-mon[53367]: from='client.25423 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm05-92320-3"}]': finished 2026-03-10T13:39:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:44 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:44 vm09 ceph-mon[53367]: from='client.28654 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm05-91018-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:44 vm09 ceph-mon[53367]: osdmap e86: 8 total, 8 up, 8 in 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: Running main() from gmock_main.cc 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [==========] Running 8 tests from 2 test suites. 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [----------] Global test environment set-up. 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [----------] 1 test from LibradosCWriteOps 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibradosCWriteOps.NewDelete 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ OK ] LibradosCWriteOps.NewDelete (0 ms) 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [----------] 1 test from LibradosCWriteOps (0 ms total) 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [----------] 7 tests from LibRadosCWriteOps 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.assertExists 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.assertExists (3075 ms) 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.WriteOpAssertVersion 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.WriteOpAssertVersion (3782 ms) 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Xattrs 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Xattrs (3608 ms) 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Write 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Write (2334 ms) 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Exec 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Exec (3101 ms) 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.WriteSame 2026-03-10T13:39:45.392 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.WriteSame (3318 ms) 2026-03-10T13:39:45.393 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.CmpExt 2026-03-10T13:39:45.393 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: 
[ OK ] LibRadosCWriteOps.CmpExt (7194 ms) 2026-03-10T13:39:45.393 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [----------] 7 tests from LibRadosCWriteOps (26412 ms total) 2026-03-10T13:39:45.393 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: 2026-03-10T13:39:45.393 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [----------] Global test environment tear-down 2026-03-10T13:39:45.393 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [==========] 8 tests from 2 test suites ran. (26412 ms total) 2026-03-10T13:39:45.393 INFO:tasks.workunit.client.0.vm05.stdout: api_c_write_operations: [ PASSED ] 8 tests. 2026-03-10T13:39:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[51512]: pgmap v78: 712 pgs: 4 active+clean+snaptrim, 12 creating+activating, 55 creating+peering, 87 unknown, 554 active+clean; 144 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 3.5 KiB/s rd, 3.0 KiB/s wr, 8 op/s 2026-03-10T13:39:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:39:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4"}]: dispatch 2026-03-10T13:39:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4"}]: dispatch 2026-03-10T13:39:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:39:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4"}]': finished 2026-03-10T13:39:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6"}]: dispatch 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[51512]: osdmap e87: 8 total, 8 up, 8 in 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6"}]: dispatch 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-9"}]: dispatch 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3437912240' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[51512]: from='client.29978 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[58955]: pgmap v78: 712 pgs: 4 active+clean+snaptrim, 12 creating+activating, 55 creating+peering, 87 unknown, 554 active+clean; 144 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 3.5 KiB/s rd, 3.0 KiB/s wr, 8 op/s 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4"}]: dispatch 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4"}]: dispatch 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4"}]': finished 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6"}]: dispatch 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[58955]: osdmap e87: 8 total, 8 up, 8 in 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6"}]: dispatch 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-9"}]: dispatch 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3437912240' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:45.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:45 vm05 ceph-mon[58955]: from='client.29978 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:45 vm09 ceph-mon[53367]: pgmap v78: 712 pgs: 4 active+clean+snaptrim, 12 creating+activating, 55 creating+peering, 87 unknown, 554 active+clean; 144 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 3.5 KiB/s rd, 3.0 KiB/s wr, 8 op/s 2026-03-10T13:39:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:39:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4"}]: dispatch 2026-03-10T13:39:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:45 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4"}]: dispatch 2026-03-10T13:39:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:45 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-9", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:39:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:45 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4"}]': finished 2026-03-10T13:39:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6"}]: dispatch 2026-03-10T13:39:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:45 vm09 ceph-mon[53367]: osdmap e87: 8 total, 8 up, 8 in 2026-03-10T13:39:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:45 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6"}]: dispatch 2026-03-10T13:39:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-9"}]: dispatch 2026-03-10T13:39:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3437912240' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:45 vm09 ceph-mon[53367]: from='client.29978 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [==========] Running 4 tests from 1 test suite. 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [----------] Global test environment set-up. 
2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [----------] 4 tests from LibRadosService 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ RUN ] LibRadosService.RegisterEarly 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ OK ] LibRadosService.RegisterEarly (5036 ms) 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ RUN ] LibRadosService.RegisterLate 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ OK ] LibRadosService.RegisterLate (14 ms) 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ RUN ] LibRadosService.StatusFormat 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: cluster: 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: id: e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: health: HEALTH_WARN 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 14 pool(s) do not have an application enabled 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: services: 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: mon: 3 daemons, quorum a,b,c (age 3m) 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: mgr: y(active, since 76s), standbys: x 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: osd: 8 osds: 8 up (since 100s), 8 in (since 111s) 2026-03-10T13:39:46.161 INFO:tasks.workunit.client.0.vm05.stdout: api_service: laundry: 2 daemons active (1 hosts) 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: rgw: 1 daemon active (1 hosts, 1 zones) 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: data: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: pools: 31 pools, 908 pgs 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: objects: 199 objects, 455 KiB 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: usage: 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: pgs: 85.463% pgs unknown 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 776 unknown 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 132 active+clean 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: io: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: client: 853 B/s rd, 0 op/s rd, 0 op/s wr 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: cluster: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: id: e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: health: HEALTH_WARN 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 17 pool(s) do not 
have an application enabled 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: services: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: mon: 3 daemons, quorum a,b,c (age 3m) 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: mgr: y(active, since 79s), standbys: x 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: osd: 8 osds: 8 up (since 102s), 8 in (since 113s) 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: foo: 16 portals active (1 hosts, 3 zones) 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: laundry: 1 daemon active (1 hosts) 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: rgw: 1 daemon active (1 hosts, 1 zones) 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: data: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: pools: 33 pools, 900 pgs 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: objects: 246 objects, 459 KiB 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: usage: 340 MiB used, 160 GiB / 160 GiB avail 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: pgs: 21.333% pgs unknown 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 26.333% pgs not active 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 470 active+clean 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 224 creating+peering 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 192 unknown 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 13 creating+activating 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 1 active 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: io: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: client: 2.9 KiB/s rd, 22 KiB/s wr, 139 op/s rd, 171 op/s wr 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ OK ] LibRadosService.StatusFormat (2289 ms) 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ RUN ] LibRadosService.Status 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ OK ] LibRadosService.Status (20015 ms) 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [----------] 4 tests from LibRadosService (27354 ms total) 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [----------] Global test environment tear-down 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [==========] 4 tests from 1 test suite ran. (27354 ms total) 2026-03-10T13:39:46.162 INFO:tasks.workunit.client.0.vm05.stdout: api_service: [ PASSED ] 4 tests. 
2026-03-10T13:39:46.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:46.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:39:46.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6"}]': finished 2026-03-10T13:39:46.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-9"}]': finished 2026-03-10T13:39:46.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.29978 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:46.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: osdmap e88: 8 total, 8 up, 8 in 2026-03-10T13:39:46.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3523330656' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm05-91051-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-91659-6", "pool2": "test-rados-api-vm05-91659-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-9", "mode": "writeback"}]: dispatch 2026-03-10T13:39:46.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.30187 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm05-91051-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-91659-6", "pool2": "test-rados-api-vm05-91659-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2079048374' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm05-91018-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.30296 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm05-91018-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[51512]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6"}]': finished 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-9"}]': finished 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.29978 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: osdmap e88: 8 total, 8 up, 8 in 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3523330656' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm05-91051-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-91659-6", "pool2": "test-rados-api-vm05-91659-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-9", "mode": "writeback"}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.30187 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm05-91051-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-91659-6", "pool2": "test-rados-api-vm05-91659-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2079048374' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm05-91018-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.30296 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm05-91018-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:46.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:46 vm05 ceph-mon[58955]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: Health check update: 8 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm05-91659-4", "tierpool": "test-rados-api-vm05-91659-6"}]': finished 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-9"}]': finished 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.29978 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: osdmap e88: 8 total, 8 up, 8 in 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3523330656' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm05-91051-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1520786659' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-91659-6", "pool2": "test-rados-api-vm05-91659-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-9", "mode": "writeback"}]: dispatch 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.30187 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm05-91051-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-91659-6", "pool2": "test-rados-api-vm05-91659-6", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2079048374' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm05-91018-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.30296 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm05-91018-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:46 vm09 ceph-mon[53367]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout:ch_notify_pp: flushed 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/0 (3012 ms) 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/1 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: trying... 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 93996095927008 notify_id 330712481797 notifier_gid 25237 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: timed out 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: flushing 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: flushed 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/1 (3175 ms) 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/0 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: List watches 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: notify2 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 93996095927008 notify_id 343597383686 notifier_gid 25237 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: notify2 done 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: watch_check 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: unwatch2 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: flushing 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: done 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/0 (3517 ms) 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/1 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: List watches 2026-03-10T13:39:48.196 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: notify2 2026-03-10T13:39:48.197 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: handle_notify cookie 93996095948912 notify_id 356482285575 notifier_gid 25237 2026-03-10T13:39:48.197 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: notify2 done 2026-03-10T13:39:48.197 INFO:tasks.workunit.client.0.vm05.stdout: 
api_watch_notify_pp: watch_check 2026-03-10T13:39:48.197 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: unwatch2 2026-03-10T13:39:48.197 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: flushing 2026-03-10T13:39:48.197 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: done 2026-03-10T13:39:48.197 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/1 (3092 ms) 2026-03-10T13:39:48.197 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [----------] 14 tests from LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP (16457 ms total) 2026-03-10T13:39:48.197 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: 2026-03-10T13:39:48.197 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [----------] Global test environment tear-down 2026-03-10T13:39:48.197 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [==========] 16 tests from 2 test suites ran. (29462 ms total) 2026-03-10T13:39:48.197 INFO:tasks.workunit.client.0.vm05.stdout: api_watch_notify_pp: [ PASSED ] 16 tests. 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:47 vm05 ceph-mon[51512]: pgmap v82: 552 pgs: 4 active+clean+snaptrim, 9 creating+peering, 119 unknown, 420 active+clean; 144 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.0 KiB/s wr, 4 op/s 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:47 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-9", "mode": "writeback"}]': finished 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:47 vm05 ceph-mon[51512]: from='client.30187 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm05-91051-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:47 vm05 ceph-mon[51512]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-91659-6", "pool2": "test-rados-api-vm05-91659-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:47 vm05 ceph-mon[51512]: from='client.30296 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm05-91018-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:47 vm05 ceph-mon[51512]: from='client.30638 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:47 vm05 ceph-mon[51512]: osdmap e89: 8 total, 8 up, 8 in 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:47 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-91333-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:47 vm05 ceph-mon[51512]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-91333-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:48 vm05 ceph-mon[58955]: pgmap v82: 552 pgs: 4 active+clean+snaptrim, 9 creating+peering, 119 unknown, 420 active+clean; 144 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.0 KiB/s wr, 4 op/s 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:48 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-9", "mode": "writeback"}]': finished 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:48 vm05 ceph-mon[58955]: from='client.30187 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm05-91051-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:48 vm05 ceph-mon[58955]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-91659-6", "pool2": "test-rados-api-vm05-91659-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:48 vm05 ceph-mon[58955]: from='client.30296 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm05-91018-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:48 vm05 ceph-mon[58955]: from='client.30638 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:48 vm05 ceph-mon[58955]: osdmap e89: 8 total, 8 up, 8 in 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:48 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-91333-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:48 vm05 ceph-mon[58955]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-91333-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:48.335 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:39:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:48 vm09 ceph-mon[53367]: pgmap v82: 552 pgs: 4 active+clean+snaptrim, 9 creating+peering, 119 unknown, 420 active+clean; 144 MiB data, 934 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 1.0 KiB/s wr, 4 op/s 2026-03-10T13:39:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:48 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:39:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-9", "mode": "writeback"}]': finished 2026-03-10T13:39:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:48 vm09 ceph-mon[53367]: from='client.30187 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm05-91051-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:48 vm09 ceph-mon[53367]: from='client.25237 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm05-91659-6", "pool2": "test-rados-api-vm05-91659-6", "yes_i_really_really_mean_it": true}]': finished 2026-03-10T13:39:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:48 vm09 ceph-mon[53367]: from='client.30296 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm05-91018-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:48 vm09 ceph-mon[53367]: from='client.30638 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:39:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:48 vm09 ceph-mon[53367]: osdmap e89: 8 total, 8 up, 8 in 2026-03-10T13:39:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:48 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-91333-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:48 vm09 ceph-mon[53367]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-91333-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:39:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:48.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:39:48.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:39:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:39:49.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:49 vm05 ceph-mon[51512]: pgmap v84: 708 pgs: 4 active+clean+snaptrim, 9 creating+peering, 279 unknown, 416 active+clean; 144 MiB data, 934 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:39:49.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:49.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:39:49.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:49 vm05 ceph-mon[51512]: osdmap e90: 8 total, 8 up, 8 in 2026-03-10T13:39:49.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-9"}]: dispatch 2026-03-10T13:39:49.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:49.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:49.603 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:49 vm05 ceph-mon[58955]: pgmap v84: 708 pgs: 4 active+clean+snaptrim, 9 creating+peering, 279 unknown, 416 active+clean; 144 MiB data, 934 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:39:49.603 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:49.603 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:49 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:39:49.603 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:49 vm05 ceph-mon[58955]: osdmap e90: 8 total, 8 up, 8 in 2026-03-10T13:39:49.603 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-9"}]: dispatch 2026-03-10T13:39:49.604 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:49.604 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:49 vm09 ceph-mon[53367]: pgmap v84: 708 pgs: 4 active+clean+snaptrim, 9 creating+peering, 279 unknown, 416 active+clean; 144 MiB data, 934 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:39:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:39:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:49 vm09 ceph-mon[53367]: osdmap e90: 8 total, 8 up, 8 in 2026-03-10T13:39:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-9"}]: dispatch 2026-03-10T13:39:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:50.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:39:50.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[51512]: from='client.30638 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-91333-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]': finished 2026-03-10T13:39:50.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-9"}]': finished 2026-03-10T13:39:50.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[51512]: osdmap e91: 8 total, 8 up, 8 in 2026-03-10T13:39:50.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2562683478' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-91051-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:50.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[51512]: from='client.31664 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-91051-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:50.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:50.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[51512]: from='client.31664 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-91051-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:50.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[51512]: osdmap e92: 8 total, 8 up, 8 in 2026-03-10T13:39:50.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3724713198' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-91018-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:50.290 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4205237495' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:50.290 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:39:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:39:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:39:50.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:39:50.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[58955]: from='client.30638 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-91333-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]': finished 2026-03-10T13:39:50.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-9"}]': finished 2026-03-10T13:39:50.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[58955]: osdmap e91: 8 total, 8 up, 8 in 2026-03-10T13:39:50.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2562683478' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-91051-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:50.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[58955]: from='client.31664 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-91051-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:50.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:50.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[58955]: from='client.31664 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-91051-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:50.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[58955]: osdmap e92: 8 total, 8 up, 8 in 2026-03-10T13:39:50.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3724713198' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-91018-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:50.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4205237495' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:50.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:50 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:39:50.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:50 vm09 ceph-mon[53367]: from='client.30638 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm05-91333-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]': finished 2026-03-10T13:39:50.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-9"}]': finished 2026-03-10T13:39:50.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:50 vm09 ceph-mon[53367]: osdmap e91: 8 total, 8 up, 8 in 2026-03-10T13:39:50.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2562683478' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-91051-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:50.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:50 vm09 ceph-mon[53367]: from='client.31664 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-91051-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:50.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:50 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:50.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:50 vm09 ceph-mon[53367]: from='client.31664 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm05-91051-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:50.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:50 vm09 ceph-mon[53367]: osdmap e92: 8 total, 8 up, 8 in 2026-03-10T13:39:50.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3724713198' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-91018-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:50.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4205237495' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [==========] Running 4 tests from 1 test suite. 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [----------] Global test environment set-up. 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [----------] 4 tests from LibRadosServicePP 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ RUN ] LibRadosServicePP.RegisterEarly 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ OK ] LibRadosServicePP.RegisterEarly (5073 ms) 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ RUN ] LibRadosServicePP.RegisterLate 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ OK ] LibRadosServicePP.RegisterLate (83 ms) 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ RUN ] LibRadosServicePP.Status 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ OK ] LibRadosServicePP.Status (20022 ms) 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ RUN ] LibRadosServicePP.Close 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: attempt 0 of 20 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ OK ] LibRadosServicePP.Close (7234 ms) 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [----------] 4 tests from LibRadosServicePP (32412 ms total) 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [----------] Global test environment tear-down 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [==========] 4 tests from 1 test suite ran. (32412 ms total) 2026-03-10T13:39:51.268 INFO:tasks.workunit.client.0.vm05.stdout: api_service_pp: [ PASSED ] 4 tests. 
2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[51512]: pgmap v87: 492 pgs: 4 active, 11 creating+activating, 9 creating+peering, 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 40 unknown, 418 active+clean; 144 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 242 B/s wr, 0 op/s 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[51512]: from='client.32485 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-91018-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[51512]: from='client.32494 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[51512]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[51512]: from='client.32485 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-91018-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[51512]: from='client.32494 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[51512]: osdmap e93: 8 total, 8 up, 8 in 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2633293571' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[58955]: pgmap v87: 492 pgs: 4 active, 11 creating+activating, 9 creating+peering, 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 40 unknown, 418 active+clean; 144 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 242 B/s wr, 0 op/s 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[58955]: from='client.32485 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-91018-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[58955]: from='client.32494 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[58955]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[58955]: from='client.32485 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-91018-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[58955]: from='client.32494 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[58955]: osdmap e93: 8 total, 8 up, 8 in 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:51.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:51 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2633293571' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T13:39:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:51 vm09 ceph-mon[53367]: pgmap v87: 492 pgs: 4 active, 11 creating+activating, 9 creating+peering, 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 40 unknown, 418 active+clean; 144 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 242 B/s wr, 0 op/s 2026-03-10T13:39:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:51 vm09 ceph-mon[53367]: from='client.32485 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-91018-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:51 vm09 ceph-mon[53367]: from='client.32494 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:51 vm09 ceph-mon[53367]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:51 vm09 ceph-mon[53367]: from='client.32485 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm05-91018-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:51 vm09 ceph-mon[53367]: from='client.32494 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91411-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:51 vm09 ceph-mon[53367]: osdmap e93: 8 total, 8 up, 8 in 2026-03-10T13:39:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2633293571' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T13:39:52.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T13:39:52.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:52.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[51512]: osdmap e94: 8 total, 8 up, 8 in 2026-03-10T13:39:52.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/4205237495' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T13:39:52.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:52.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[51512]: from='client.32494 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T13:39:52.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:52.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:39:52.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T13:39:52.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:52.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[58955]: osdmap e94: 8 total, 8 up, 8 in 2026-03-10T13:39:52.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4205237495' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T13:39:52.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:52.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[58955]: from='client.32494 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T13:39:52.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:52.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:52 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:39:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:52 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T13:39:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:52 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:52 vm09 ceph-mon[53367]: osdmap e94: 8 total, 8 up, 8 in 2026-03-10T13:39:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4205237495' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T13:39:52.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:52.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:52 vm09 ceph-mon[53367]: from='client.32494 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T13:39:52.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:52 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:39:52.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:52 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:39:53.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[51512]: pgmap v91: 556 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 168 unknown, 378 active+clean; 144 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 3.8 KiB/s wr, 6 op/s 2026-03-10T13:39:53.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[51512]: from='client.32494 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache", "force_nonempty":""}]': finished 2026-03-10T13:39:53.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[51512]: osdmap e95: 8 total, 8 up, 8 in 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4205237495' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91411-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3696683191' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-91051-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/154306994' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-91018-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[51512]: from='client.32494 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91411-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[51512]: from='client.33605 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-91051-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[51512]: from='client.33824 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-91018-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[58955]: pgmap v91: 556 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 168 unknown, 378 active+clean; 144 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 3.8 KiB/s wr, 6 op/s 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[58955]: from='client.32494 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache", "force_nonempty":""}]': finished 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[58955]: osdmap e95: 8 total, 8 up, 8 in 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4205237495' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91411-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3696683191' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-91051-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/154306994' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-91018-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[58955]: from='client.32494 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91411-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[58955]: from='client.33605 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-91051-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[58955]: from='client.33824 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-91018-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:53.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:39:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:53 vm09 ceph-mon[53367]: pgmap v91: 556 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 168 unknown, 378 active+clean; 144 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 3.8 KiB/s wr, 6 op/s 2026-03-10T13:39:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:53 vm09 ceph-mon[53367]: from='client.32494 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache", "force_nonempty":""}]': finished 2026-03-10T13:39:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:53 vm09 ceph-mon[53367]: osdmap e95: 8 total, 8 up, 8 in 2026-03-10T13:39:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4205237495' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91411-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3696683191' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-91051-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:53 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/154306994' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-91018-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:53 vm09 ceph-mon[53367]: from='client.32494 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91411-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:53 vm09 ceph-mon[53367]: from='client.33605 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-91051-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:53 vm09 ceph-mon[53367]: from='client.33824 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-91018-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:53.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:53.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:39:54.096 INFO:tasks.workunit.client.0.vm05.stdout: misc: Running main() from gmock_main.cc 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [==========] Running 12 tests from 1 test suite. 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [----------] Global test environment set-up. 
2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [----------] 12 tests from NeoRadosMisc 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.Version 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.Version (1639 ms) 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.WaitOSDMap 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.WaitOSDMap (2088 ms) 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.LongName 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.LongName (5186 ms) 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.LongLocator 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.LongLocator (2451 ms) 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.LongNamespace 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.LongNamespace (3054 ms) 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.LongAttrName 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.LongAttrName (3194 ms) 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.Exec 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.Exec (3142 ms) 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.Operate1 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.Operate1 (3171 ms) 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.Operate2 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.Operate2 (2610 ms) 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.BigObject 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.BigObject (3316 ms) 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.BigAttr 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.BigAttr (1895 ms) 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ RUN ] NeoRadosMisc.WriteSame 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ OK ] NeoRadosMisc.WriteSame (2989 ms) 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [----------] 12 tests from NeoRadosMisc (34735 ms total) 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [----------] Global test environment tear-down 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [==========] 12 tests from 1 test suite ran. (34735 ms total) 2026-03-10T13:39:54.097 INFO:tasks.workunit.client.0.vm05.stdout: misc: [ PASSED ] 12 tests. 
2026-03-10T13:39:55.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[51512]: from='client.32494 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91411-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:55.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[51512]: from='client.33605 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-91051-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:55.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[51512]: from='client.33824 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-91018-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:55.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[51512]: osdmap e96: 8 total, 8 up, 8 in 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4205237495' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache"}]: dispatch 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[51512]: pgmap v93: 684 pgs: 128 creating+peering, 32 creating+activating, 2 active+clean+snaptrim, 522 active+clean; 145 MiB data, 914 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 259 KiB/s wr, 4 op/s 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-11"}]: dispatch 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[51512]: from='client.32494 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache"}]: dispatch 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[58955]: from='client.32494 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91411-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[58955]: from='client.33605 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-91051-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[58955]: from='client.33824 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-91018-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[58955]: osdmap e96: 8 total, 8 up, 8 in 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4205237495' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache"}]: dispatch 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[58955]: pgmap v93: 684 pgs: 128 creating+peering, 32 creating+activating, 2 active+clean+snaptrim, 522 active+clean; 145 MiB data, 914 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 259 KiB/s wr, 4 op/s 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-11"}]: dispatch 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:55.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:55 vm05 ceph-mon[58955]: from='client.32494 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache"}]: dispatch 2026-03-10T13:39:55.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:55 vm09 ceph-mon[53367]: from='client.32494 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91411-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:55.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:55 vm09 ceph-mon[53367]: from='client.33605 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm05-91051-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:55.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:55 vm09 ceph-mon[53367]: from='client.33824 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm05-91018-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:55.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-11", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:39:55.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:55 vm09 ceph-mon[53367]: osdmap e96: 8 total, 8 up, 8 in 2026-03-10T13:39:55.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4205237495' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache"}]: dispatch 2026-03-10T13:39:55.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:55 vm09 ceph-mon[53367]: pgmap v93: 684 pgs: 128 creating+peering, 32 creating+activating, 2 active+clean+snaptrim, 522 active+clean; 145 MiB data, 914 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 259 KiB/s wr, 4 op/s 2026-03-10T13:39:55.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-11"}]: dispatch 2026-03-10T13:39:55.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:55.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:55 vm09 ceph-mon[53367]: from='client.32494 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache"}]: dispatch 2026-03-10T13:39:56.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-11"}]': finished 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[51512]: from='client.32494 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache"}]': finished 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[51512]: osdmap e97: 8 total, 8 up, 8 in 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-11", "mode": "writeback"}]: dispatch 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[51512]: Health check update: 10 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-11", "mode": "writeback"}]': finished 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[51512]: osdmap e98: 8 total, 8 up, 8 in 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/814263730' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[51512]: from='client.36295 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-11"}]': finished 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[58955]: from='client.32494 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache"}]': finished 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[58955]: osdmap e97: 8 total, 8 up, 8 in 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-11", "mode": "writeback"}]: dispatch 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[58955]: Health check update: 10 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-11", "mode": "writeback"}]': finished 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[58955]: osdmap e98: 8 total, 8 up, 8 in 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/814263730' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:56.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:56 vm05 ceph-mon[58955]: from='client.36295 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:56.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:56 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-11"}]': finished 2026-03-10T13:39:56.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:56 vm09 ceph-mon[53367]: from='client.32494 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91411-10", "tierpool":"test-rados-api-vm05-91411-10-cache"}]': finished 2026-03-10T13:39:56.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:56 vm09 ceph-mon[53367]: osdmap e97: 8 total, 8 up, 8 in 2026-03-10T13:39:56.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:56 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-11", "mode": "writeback"}]: dispatch 2026-03-10T13:39:56.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:56 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:56.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:56 vm09 ceph-mon[53367]: Health check update: 10 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:39:56.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:56 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:39:56.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:56 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-11", "mode": "writeback"}]': finished 2026-03-10T13:39:56.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:56 vm09 ceph-mon[53367]: osdmap e98: 8 total, 8 up, 8 in 2026-03-10T13:39:56.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:56 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/814263730' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:56.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:56 vm09 ceph-mon[53367]: from='client.36295 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: Running main() from gmock_main.cc 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [==========] Running 9 tests from 1 test suite. 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [----------] Global test environment set-up. 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [----------] 9 tests from LibRadosPools 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolList 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolList (2427 ms) 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookup 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolLookup (3297 ms) 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookup2 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolLookup2 (3923 ms) 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookupOtherInstance 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolLookupOtherInstance (2441 ms) 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolReverseLookupOtherInstance 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolReverseLookupOtherInstance (3052 ms) 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolDelete 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolDelete (5425 ms) 2026-03-10T13:39:57.217 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolCreateDelete 2026-03-10T13:39:57.218 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolCreateDelete (5100 ms) 2026-03-10T13:39:57.218 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolCreateWithCrushRule 2026-03-10T13:39:57.218 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolCreateWithCrushRule (4902 ms) 2026-03-10T13:39:57.218 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ RUN ] LibRadosPools.PoolGetBaseTier 2026-03-10T13:39:57.218 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ OK ] LibRadosPools.PoolGetBaseTier (8026 ms) 2026-03-10T13:39:57.218 
INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [----------] 9 tests from LibRadosPools (38593 ms total) 2026-03-10T13:39:57.218 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: 2026-03-10T13:39:57.218 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [----------] Global test environment tear-down 2026-03-10T13:39:57.218 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [==========] 9 tests from 1 test suite ran. (38593 ms total) 2026-03-10T13:39:57.218 INFO:tasks.workunit.client.0.vm05.stdout: api_pool: [ PASSED ] 9 tests. 2026-03-10T13:39:57.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:57 vm05 ceph-mon[51512]: pgmap v97: 588 pgs: 96 unknown, 12 creating+activating, 2 active+clean+snaptrim, 478 active+clean; 145 MiB data, 914 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 258 KiB/s wr, 5 op/s 2026-03-10T13:39:57.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:57.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:57 vm05 ceph-mon[58955]: pgmap v97: 588 pgs: 96 unknown, 12 creating+activating, 2 active+clean+snaptrim, 478 active+clean; 145 MiB data, 914 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 258 KiB/s wr, 5 op/s 2026-03-10T13:39:57.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:57.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:57 vm09 ceph-mon[53367]: pgmap v97: 588 pgs: 96 unknown, 12 creating+activating, 2 active+clean+snaptrim, 478 active+clean; 145 MiB data, 914 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 258 KiB/s wr, 5 op/s 2026-03-10T13:39:57.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:57 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:58.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:58 vm05 ceph-mon[51512]: from='client.36295 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:58.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:58 vm05 ceph-mon[51512]: osdmap e99: 8 total, 8 up, 8 in 2026-03-10T13:39:58.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:58 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:58.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:58 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1446433005' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm05-91018-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:58.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:58 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:39:58.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:58 vm05 ceph-mon[58955]: from='client.36295 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:58.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:58 vm05 ceph-mon[58955]: osdmap e99: 8 total, 8 up, 8 in 2026-03-10T13:39:58.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:58 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:58.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:58 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1446433005' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm05-91018-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:58.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:58 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:39:58.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:58 vm09 ceph-mon[53367]: from='client.36295 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:58.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:58 vm09 ceph-mon[53367]: osdmap e99: 8 total, 8 up, 8 in 2026-03-10T13:39:58.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:58 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:58.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:58 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1446433005' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm05-91018-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:39:58.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:58 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:39:58.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:39:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:39:59.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:59 vm05 ceph-mon[51512]: pgmap v99: 556 pgs: 128 unknown, 2 active+clean+snaptrim, 426 active+clean; 145 MiB data, 914 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:39:59.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:59.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:59 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1446433005' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm05-91018-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:59.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:39:59.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:59 vm05 ceph-mon[51512]: osdmap e100: 8 total, 8 up, 8 in 2026-03-10T13:39:59.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-11"}]: dispatch 2026-03-10T13:39:59.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:39:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:59 vm05 ceph-mon[58955]: pgmap v99: 556 pgs: 128 unknown, 2 active+clean+snaptrim, 426 active+clean; 145 MiB data, 914 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:39:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1446433005' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm05-91018-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:39:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:59 vm05 ceph-mon[58955]: osdmap e100: 8 total, 8 up, 8 in 2026-03-10T13:39:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-11"}]: dispatch 2026-03-10T13:39:59.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:39:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:39:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:59 vm09 ceph-mon[53367]: pgmap v99: 556 pgs: 128 unknown, 2 active+clean+snaptrim, 426 active+clean; 145 MiB data, 914 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:39:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:39:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:59 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1446433005' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm05-91018-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:39:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:39:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:59 vm09 ceph-mon[53367]: osdmap e100: 8 total, 8 up, 8 in 2026-03-10T13:39:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-11"}]: dispatch 2026-03-10T13:39:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:39:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:00.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:39:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:39:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-11"}]': finished 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: osdmap e101: 8 total, 8 up, 8 in 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-10T13:40:00.836 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: 16.8 deep-scrub starts 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: 16.8 deep-scrub ok 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: 16.4 deep-scrub starts 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: 16.4 deep-scrub ok 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: Health detail: HEALTH_WARN 2 stray daemon(s) not managed by cephadm; 7 pool(s) do not have an application enabled 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: [WRN] CEPHADM_STRAY_DAEMON: 2 stray daemon(s) not managed by cephadm 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: stray daemon laundry.pid91764 on host vm05 not managed by cephadm 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: stray daemon laundry.pid91824 on host vm05 not managed by cephadm 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: [WRN] POOL_APP_NOT_ENABLED: 7 pool(s) do not have an application enabled 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: application not enabled on pool 'WatchNotifyvm05-92449-1' 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: application not enabled on pool 'AssertExistsvm05-92484-1' 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: application not enabled on pool 'LibRadosSnapshotsEC_vm05-91333-10' 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: application not enabled on pool 'XattrsRoundTripvm05-92184-12' 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: application not enabled on pool 'OmapNulsvm05-92423-12' 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: application not enabled on pool 'IsComplete_vm05-91018-12' 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: 16.3 deep-scrub starts 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: 16.3 deep-scrub ok 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.30638 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]': finished 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: osdmap e102: 8 total, 8 up, 8 in 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1576133046' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-91051-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.38485 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-91051-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1691070436' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm05-91018-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[51512]: from='client.38767 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm05-91018-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-11"}]': finished 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: osdmap e101: 8 total, 8 up, 8 in 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='mon.? 
v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-10T13:40:00.837 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: 16.8 deep-scrub starts 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: 16.8 deep-scrub ok 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: 16.4 deep-scrub starts 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: 16.4 deep-scrub ok 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: Health detail: HEALTH_WARN 2 stray daemon(s) not managed by cephadm; 7 pool(s) do not have an application enabled 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: [WRN] CEPHADM_STRAY_DAEMON: 2 stray daemon(s) not managed by cephadm 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: stray daemon laundry.pid91764 on host vm05 not managed by cephadm 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: stray daemon laundry.pid91824 on host vm05 not managed by cephadm 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: [WRN] POOL_APP_NOT_ENABLED: 7 pool(s) do not have an application enabled 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: application not enabled on pool 'WatchNotifyvm05-92449-1' 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: application not enabled on pool 'AssertExistsvm05-92484-1' 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: application not enabled on pool 'LibRadosSnapshotsEC_vm05-91333-10' 2026-03-10T13:40:00.838 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: application not enabled on pool 'XattrsRoundTripvm05-92184-12' 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: application not enabled on pool 'OmapNulsvm05-92423-12' 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: application not enabled on pool 'IsComplete_vm05-91018-12' 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: 16.3 deep-scrub starts 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: 16.3 deep-scrub ok 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.30638 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]': finished 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: osdmap e102: 8 total, 8 up, 8 in 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1576133046' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-91051-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.38485 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-91051-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1691070436' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm05-91018-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:40:00.838 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:00 vm05 ceph-mon[58955]: from='client.38767 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm05-91018-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-11"}]': finished 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: osdmap e101: 8 total, 8 up, 8 in 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.0"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.1"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.2"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.3"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.4"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.5"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.6"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-10T13:40:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.7"}]: dispatch 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.8"}]: dispatch 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.109:0/3702463149' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "16.9"}]: dispatch 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: 16.8 deep-scrub starts 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: 16.8 deep-scrub ok 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: 16.4 deep-scrub starts 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: 16.4 deep-scrub ok 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: Health detail: HEALTH_WARN 2 stray daemon(s) not managed by cephadm; 7 pool(s) do not have an application enabled 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: [WRN] CEPHADM_STRAY_DAEMON: 2 stray daemon(s) not managed by cephadm 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: stray daemon laundry.pid91764 on host vm05 not managed by cephadm 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: stray daemon laundry.pid91824 on host vm05 not managed by cephadm 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: [WRN] POOL_APP_NOT_ENABLED: 7 pool(s) do not have an application enabled 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: application not enabled on pool 'WatchNotifyvm05-92449-1' 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: application not enabled on pool 'AssertExistsvm05-92484-1' 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: application not enabled on pool 'LibRadosSnapshotsEC_vm05-91333-10' 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: application not enabled on pool 'XattrsRoundTripvm05-92184-12' 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: application not enabled on pool 'OmapNulsvm05-92423-12' 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: application not enabled on pool 'IsComplete_vm05-91018-12' 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: 16.3 deep-scrub starts 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: 16.3 deep-scrub ok 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.30638 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm05-91333-10"}]': finished 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: osdmap e102: 8 total, 8 up, 8 in 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1576133046' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-91051-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.38485 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-91051-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/731513293' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1691070436' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm05-91018-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.30638 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-91333-10"}]: dispatch 2026-03-10T13:40:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:00 vm09 ceph-mon[53367]: from='client.38767 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm05-91018-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: 16.6 deep-scrub starts 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: 16.6 deep-scrub ok 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: 16.2 deep-scrub starts 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: 16.2 deep-scrub ok 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: pgmap v102: 516 pgs: 5 active+clean+snaptrim_wait, 14 creating+peering, 82 unknown, 4 active+clean+snaptrim, 411 active+clean; 144 MiB data, 918 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 758 B/s wr, 4 op/s 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: 16.1 deep-scrub starts 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: 16.1 deep-scrub ok 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: 16.9 deep-scrub starts 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: 16.9 deep-scrub ok 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: 16.5 deep-scrub starts 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: 16.5 deep-scrub ok 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: 16.7 deep-scrub starts 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: 16.7 deep-scrub ok 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: from='client.38485 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-91051-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: from='client.30638 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-91333-10"}]': finished 2026-03-10T13:40:01.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: from='client.38767 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm05-91018-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 
13:40:01 vm05 ceph-mon[51512]: osdmap e103: 8 total, 8 up, 8 in 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: 16.6 deep-scrub starts 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: 16.6 deep-scrub ok 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: 16.2 deep-scrub starts 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: 16.2 deep-scrub ok 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: pgmap v102: 516 pgs: 5 active+clean+snaptrim_wait, 14 creating+peering, 82 unknown, 4 active+clean+snaptrim, 411 active+clean; 144 MiB data, 918 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 758 B/s wr, 4 op/s 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: 16.1 deep-scrub starts 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: 16.1 deep-scrub ok 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: 16.9 deep-scrub starts 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: 16.9 deep-scrub ok 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: 16.5 deep-scrub starts 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: 16.5 deep-scrub ok 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: 16.7 deep-scrub starts 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: 16.7 deep-scrub ok 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: from='client.38485 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-91051-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: from='client.30638 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-91333-10"}]': finished 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: from='client.38767 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm05-91018-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: osdmap e103: 8 total, 8 up, 8 in 2026-03-10T13:40:01.833 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:01.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: 16.6 deep-scrub starts 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: 16.6 deep-scrub ok 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: 16.2 deep-scrub starts 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: 16.2 deep-scrub ok 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: pgmap v102: 516 pgs: 5 active+clean+snaptrim_wait, 14 creating+peering, 82 unknown, 4 active+clean+snaptrim, 411 active+clean; 144 MiB data, 918 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 758 B/s wr, 4 op/s 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: 16.1 deep-scrub starts 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: 16.1 deep-scrub ok 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: 16.9 deep-scrub starts 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: 16.9 deep-scrub ok 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: 16.5 deep-scrub starts 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: 16.5 deep-scrub ok 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: 16.7 deep-scrub starts 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: 16.7 deep-scrub ok 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: from='client.38485 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm05-91051-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: from='client.30638 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm05-91333-10"}]': finished 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: from='client.38767 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm05-91018-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: osdmap e103: 8 total, 8 up, 8 in 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:03.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[58955]: 16.0 deep-scrub starts 2026-03-10T13:40:03.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[58955]: 16.0 deep-scrub ok 2026-03-10T13:40:03.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:03.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[58955]: osdmap e104: 8 total, 8 up, 8 in 2026-03-10T13:40:03.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[58955]: pgmap v106: 516 pgs: 5 active+clean+snaptrim_wait, 14 creating+peering, 114 unknown, 2 active+clean+snaptrim, 381 active+clean; 144 MiB data, 918 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s 2026-03-10T13:40:03.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:03.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[58955]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:03.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:03.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[58955]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:03.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:03.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[58955]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:03.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:03.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[51512]: 16.0 deep-scrub starts 2026-03-10T13:40:03.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[51512]: 16.0 deep-scrub ok 2026-03-10T13:40:03.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:03.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[51512]: osdmap e104: 8 total, 8 up, 8 in 2026-03-10T13:40:03.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[51512]: pgmap v106: 516 pgs: 5 active+clean+snaptrim_wait, 14 creating+peering, 114 unknown, 2 active+clean+snaptrim, 381 active+clean; 144 MiB data, 918 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s 2026-03-10T13:40:03.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:03.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[51512]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:03.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:03.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[51512]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:03.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:03.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[51512]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:03.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:03.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:03 vm09 ceph-mon[53367]: 16.0 deep-scrub starts 2026-03-10T13:40:03.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:03 vm09 ceph-mon[53367]: 16.0 deep-scrub ok 2026-03-10T13:40:03.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:03 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:03.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:03 vm09 ceph-mon[53367]: osdmap e104: 8 total, 8 up, 8 in 2026-03-10T13:40:03.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:03 vm09 ceph-mon[53367]: pgmap v106: 516 pgs: 5 active+clean+snaptrim_wait, 14 creating+peering, 114 unknown, 2 active+clean+snaptrim, 381 active+clean; 144 MiB data, 918 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s 2026-03-10T13:40:03.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:03 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:03.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:03 vm09 ceph-mon[53367]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:03.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:03 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:03.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:03 vm09 ceph-mon[53367]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:03.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:03 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:03.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:03 vm09 ceph-mon[53367]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:03.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:03 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:04.220 INFO:tasks.workunit.client.0.vm05.stdout: io: Running main() from gmock_main.cc 2026-03-10T13:40:04.220 INFO:tasks.workunit.client.0.vm05.stdout: io: [==========] Running 14 tests from 1 test suite. 2026-03-10T13:40:04.220 INFO:tasks.workunit.client.0.vm05.stdout: io: [----------] Global test environment set-up. 
2026-03-10T13:40:04.220 INFO:tasks.workunit.client.0.vm05.stdout: io: [----------] 14 tests from NeoRadosIo 2026-03-10T13:40:04.220 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.Limits 2026-03-10T13:40:04.220 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.Limits (2871 ms) 2026-03-10T13:40:04.220 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.SimpleWrite 2026-03-10T13:40:04.220 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.SimpleWrite (3684 ms) 2026-03-10T13:40:04.220 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.ReadOp 2026-03-10T13:40:04.220 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.ReadOp (3611 ms) 2026-03-10T13:40:04.220 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.SparseRead 2026-03-10T13:40:04.220 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.SparseRead (2344 ms) 2026-03-10T13:40:04.220 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.RoundTrip 2026-03-10T13:40:04.220 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.RoundTrip (3097 ms) 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.ReadIntoBuufferlist 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.ReadIntoBuufferlist (3236 ms) 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.OverlappingWriteRoundTrip 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.OverlappingWriteRoundTrip (4152 ms) 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.WriteFullRoundTrip 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.WriteFullRoundTrip (3149 ms) 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.AppendRoundTrip 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.AppendRoundTrip (3820 ms) 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.Trunc 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.Trunc (2923 ms) 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.Remove 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.Remove (3054 ms) 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.XattrsRoundTrip 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.XattrsRoundTrip (3216 ms) 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.RmXattr 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.RmXattr (2822 ms) 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ RUN ] NeoRadosIo.GetXattrs 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ OK ] NeoRadosIo.GetXattrs (3014 ms) 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [----------] 14 tests from NeoRadosIo (44993 ms total) 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [----------] Global test environment tear-down 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [==========] 14 tests from 1 test suite ran. 
(44993 ms total) 2026-03-10T13:40:04.221 INFO:tasks.workunit.client.0.vm05.stdout: io: [ PASSED ] 14 tests. 2026-03-10T13:40:04.225 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: Running main() from gmock_main.cc 2026-03-10T13:40:04.225 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [==========] Running 14 tests from 1 test suite. 2026-03-10T13:40:04.225 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [----------] Global test environment set-up. 2026-03-10T13:40:04.225 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [----------] 14 tests from NeoRadosReadOps 2026-03-10T13:40:04.225 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.SetOpFlags 2026-03-10T13:40:04.225 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.SetOpFlags (2617 ms) 2026-03-10T13:40:04.225 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.AssertExists 2026-03-10T13:40:04.225 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.AssertExists (3791 ms) 2026-03-10T13:40:04.225 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.AssertVersion 2026-03-10T13:40:04.225 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.AssertVersion (3597 ms) 2026-03-10T13:40:04.225 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.CmpXattr 2026-03-10T13:40:04.225 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.CmpXattr (2351 ms) 2026-03-10T13:40:04.225 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.Read 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.Read (3095 ms) 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.Checksum 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.Checksum (3310 ms) 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.RWOrderedRead 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.RWOrderedRead (4088 ms) 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.ShortRead 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.ShortRead (3118 ms) 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.Exec 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.Exec (3847 ms) 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.Stat 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.Stat (2900 ms) 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.Omap 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.Omap (3070 ms) 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.OmapNuls 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.OmapNuls (3195 ms) 2026-03-10T13:40:04.226 
INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.GetXattrs 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.GetXattrs (2720 ms) 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ RUN ] NeoRadosReadOps.CmpExt 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ OK ] NeoRadosReadOps.CmpExt (3139 ms) 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [----------] 14 tests from NeoRadosReadOps (44838 ms total) 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [----------] Global test environment tear-down 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [==========] 14 tests from 1 test suite ran. (44838 ms total) 2026-03-10T13:40:04.226 INFO:tasks.workunit.client.0.vm05.stdout: read_operations: [ PASSED ] 14 tests. 2026-03-10T13:40:04.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:04 vm05 ceph-mon[51512]: from='client.39967 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:04.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:04 vm05 ceph-mon[51512]: osdmap e105: 8 total, 8 up, 8 in 2026-03-10T13:40:04.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:04.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:04 vm05 ceph-mon[51512]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:04.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:04.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:04 vm05 ceph-mon[51512]: osdmap e106: 8 total, 8 up, 8 in 2026-03-10T13:40:04.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:04 vm05 ceph-mon[58955]: from='client.39967 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:04.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:04 vm05 ceph-mon[58955]: osdmap e105: 8 total, 8 up, 8 in 2026-03-10T13:40:04.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:04 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:04.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:04 vm05 ceph-mon[58955]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:04.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:04.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:04 vm05 ceph-mon[58955]: osdmap e106: 8 total, 8 up, 8 in 2026-03-10T13:40:04.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:04 vm09 ceph-mon[53367]: from='client.39967 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:04.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:04 vm09 ceph-mon[53367]: osdmap e105: 8 total, 8 up, 8 in 2026-03-10T13:40:04.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:04.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:04 vm09 ceph-mon[53367]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:04.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:04.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:04 vm09 ceph-mon[53367]: osdmap e106: 8 total, 8 up, 8 in 2026-03-10T13:40:04.673 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:40:04 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=infra.usagestats t=2026-03-10T13:40:04.528302308Z level=info msg="Usage stats are ready to report" 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[58955]: pgmap v109: 516 pgs: 64 unknown, 2 active+clean+snaptrim, 450 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 19 MiB/s wr, 46 op/s 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2848297597' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-91051-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3488845697' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm05-91018-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[58955]: from='client.40504 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-91051-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[58955]: from='client.40226 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm05-91018-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[58955]: from='client.39967 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]': finished 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[58955]: from='client.40504 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-91051-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[58955]: from='client.40226 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm05-91018-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[58955]: osdmap e107: 8 total, 8 up, 8 in 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[51512]: pgmap v109: 516 pgs: 64 unknown, 2 active+clean+snaptrim, 450 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 19 MiB/s wr, 46 op/s 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2848297597' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-91051-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3488845697' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm05-91018-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[51512]: from='client.40504 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-91051-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[51512]: from='client.40226 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm05-91018-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[51512]: from='client.39967 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]': finished 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[51512]: from='client.40504 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-91051-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[51512]: from='client.40226 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm05-91018-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:05.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:05 vm05 ceph-mon[51512]: osdmap e107: 8 total, 8 up, 8 in 2026-03-10T13:40:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:05 vm09 ceph-mon[53367]: pgmap v109: 516 pgs: 64 unknown, 2 active+clean+snaptrim, 450 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 19 MiB/s wr, 46 op/s 2026-03-10T13:40:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:05 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2848297597' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-91051-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:05 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3488845697' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm05-91018-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:05 vm09 ceph-mon[53367]: from='client.40504 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-91051-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:05 vm09 ceph-mon[53367]: from='client.40226 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm05-91018-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:05 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:05 vm09 ceph-mon[53367]: from='client.39967 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm05-91333-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]': finished 2026-03-10T13:40:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:05 vm09 ceph-mon[53367]: from='client.40504 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm05-91051-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:05 vm09 ceph-mon[53367]: from='client.40226 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm05-91018-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:05.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:05 vm09 ceph-mon[53367]: osdmap e107: 8 total, 8 up, 8 in 2026-03-10T13:40:06.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:06 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:06.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:06 vm05 ceph-mon[51512]: osdmap e108: 8 total, 8 up, 8 in 2026-03-10T13:40:06.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:06 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:06.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:06 vm05 ceph-mon[58955]: osdmap e108: 8 total, 8 up, 8 in 2026-03-10T13:40:06.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:06 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:06.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:06 vm09 ceph-mon[53367]: osdmap e108: 8 total, 8 up, 8 in 2026-03-10T13:40:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:07 vm09 ceph-mon[53367]: pgmap v111: 492 pgs: 72 unknown, 1 active+clean+snaptrim, 419 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 38 op/s 2026-03-10T13:40:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:07 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:40:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:07 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:40:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:07 vm09 ceph-mon[53367]: osdmap e109: 8 total, 8 up, 8 in 2026-03-10T13:40:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-13"}]: dispatch 2026-03-10T13:40:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:07 vm05 ceph-mon[51512]: pgmap v111: 492 pgs: 72 unknown, 1 active+clean+snaptrim, 419 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 38 op/s 2026-03-10T13:40:07.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:07.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:40:07.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:07 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:07.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:40:07.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:07 vm05 ceph-mon[51512]: osdmap e109: 8 total, 8 up, 8 in 2026-03-10T13:40:07.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:07 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-13"}]: dispatch 2026-03-10T13:40:07.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:07 vm05 ceph-mon[58955]: pgmap v111: 492 pgs: 72 unknown, 1 active+clean+snaptrim, 419 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 18 MiB/s wr, 38 op/s 2026-03-10T13:40:07.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:07.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-13", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:40:07.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:07 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:07.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-13", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:40:07.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:07 vm05 ceph-mon[58955]: osdmap e109: 8 total, 8 up, 8 in 2026-03-10T13:40:07.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-13"}]: dispatch 2026-03-10T13:40:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:40:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4241252719' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-91051-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3235538318' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm05-91018-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:08 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:08 vm09 ceph-mon[53367]: from='client.42266 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-91051-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:08 vm09 ceph-mon[53367]: from='client.42413 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm05-91018-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-13"}]': finished 2026-03-10T13:40:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:08 vm09 ceph-mon[53367]: from='client.42266 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-91051-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:08 vm09 ceph-mon[53367]: from='client.42413 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm05-91018-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:08 vm09 ceph-mon[53367]: osdmap e110: 8 total, 8 up, 8 in 2026-03-10T13:40:08.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-13", "mode": "writeback"}]: dispatch 2026-03-10T13:40:08.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:40:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4241252719' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-91051-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3235538318' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm05-91018-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[51512]: from='client.42266 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-91051-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[51512]: from='client.42413 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm05-91018-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-13"}]': finished 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[51512]: from='client.42266 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-91051-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[51512]: from='client.42413 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm05-91018-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[51512]: osdmap e110: 8 total, 8 up, 8 in 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-13", "mode": "writeback"}]: dispatch 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4241252719' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-91051-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3235538318' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm05-91018-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[58955]: from='client.42266 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-91051-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[58955]: from='client.42413 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm05-91018-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-13"}]': finished 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[58955]: from='client.42266 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm05-91051-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[58955]: from='client.42413 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm05-91018-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[58955]: osdmap e110: 8 total, 8 up, 8 in 2026-03-10T13:40:08.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-13", "mode": "writeback"}]: dispatch 2026-03-10T13:40:09.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:09 vm05 ceph-mon[51512]: pgmap v114: 524 pgs: 104 unknown, 1 active+clean+snaptrim, 419 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:09.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:40:09.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:09.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:09 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:40:09.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-13", "mode": "writeback"}]': finished 2026-03-10T13:40:09.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:09 vm05 ceph-mon[51512]: osdmap e111: 8 total, 8 up, 8 in 2026-03-10T13:40:09.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:09 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:09.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:09 vm05 ceph-mon[58955]: pgmap v114: 524 pgs: 104 unknown, 1 active+clean+snaptrim, 419 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:09.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:40:09.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:09.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:09 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:40:09.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-13", "mode": "writeback"}]': finished 2026-03-10T13:40:09.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:09 vm05 ceph-mon[58955]: osdmap e111: 8 total, 8 up, 8 in 2026-03-10T13:40:09.834 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:09 vm09 ceph-mon[53367]: pgmap v114: 524 pgs: 104 unknown, 1 active+clean+snaptrim, 419 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:40:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:09 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:40:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-13", "mode": "writeback"}]': finished 2026-03-10T13:40:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:09 vm09 ceph-mon[53367]: osdmap e111: 8 total, 8 up, 8 in 2026-03-10T13:40:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:10.334 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:40:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:40:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:40:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.0"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.1"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.2"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.3"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.4"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.5"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.6"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.7"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.8"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.9"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[51512]: osdmap e112: 8 total, 8 up, 8 in 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3882516216' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm05-91018-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.0"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.1"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.2"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.3"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.4"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.5"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.6"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.7"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.8"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.9"}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[58955]: osdmap e112: 8 total, 8 up, 8 in 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3882516216' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm05-91018-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:10.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.0"}]: dispatch 2026-03-10T13:40:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.1"}]: dispatch 2026-03-10T13:40:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.2"}]: dispatch 2026-03-10T13:40:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.3"}]: dispatch 2026-03-10T13:40:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:10 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.4"}]: dispatch 2026-03-10T13:40:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.5"}]: dispatch 2026-03-10T13:40:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.6"}]: dispatch 2026-03-10T13:40:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.7"}]: dispatch 2026-03-10T13:40:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.8"}]: dispatch 2026-03-10T13:40:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "162.9"}]: dispatch 2026-03-10T13:40:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:10 vm09 ceph-mon[53367]: osdmap e112: 8 total, 8 up, 8 in 2026-03-10T13:40:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3882516216' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm05-91018-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: pgmap v117: 460 pgs: 1 active+clean+snaptrim, 459 active+clean; 217 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 348 KiB/s wr, 90 op/s 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.0"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.1"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.2"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.3"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.4"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "162.5"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.6"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.7"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.8"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.9"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: 162.2 deep-scrub starts 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: 162.2 deep-scrub ok 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: 162.4 scrub starts 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: 162.4 scrub ok 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: 162.3 scrub starts 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: 162.3 scrub ok 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3882516216' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm05-91018-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: osdmap e113: 8 total, 8 up, 8 in 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3462532480' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm05-91051-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: pgmap v117: 460 pgs: 1 active+clean+snaptrim, 459 active+clean; 217 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 348 KiB/s wr, 90 op/s 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.0"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.1"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "162.2"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.3"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.4"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.5"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.6"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.7"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.8"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.9"}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: 162.2 deep-scrub starts 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: 162.2 deep-scrub ok 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: 162.4 scrub starts 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: 162.4 scrub ok 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: 162.3 scrub starts 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: 162.3 scrub ok 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3882516216' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm05-91018-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: osdmap e113: 8 total, 8 up, 8 in 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3462532480' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm05-91051-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:11.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:11 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: pgmap v117: 460 pgs: 1 active+clean+snaptrim, 459 active+clean; 217 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 348 KiB/s wr, 90 op/s 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.0"}]: dispatch 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.1"}]: dispatch 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.2"}]: dispatch 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.3"}]: dispatch 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.4"}]: dispatch 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.5"}]: dispatch 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.6"}]: dispatch 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.7"}]: dispatch 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.8"}]: dispatch 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "162.9"}]: dispatch 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: 162.2 deep-scrub starts 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: 162.2 deep-scrub ok 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: 162.4 scrub starts 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: 162.4 scrub ok 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: 162.3 scrub starts 2026-03-10T13:40:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: 162.3 scrub ok 2026-03-10T13:40:11.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3882516216' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm05-91018-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:11.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: osdmap e113: 8 total, 8 up, 8 in 2026-03-10T13:40:11.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3462532480' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm05-91051-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:11.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:12.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[58955]: 162.7 scrub starts 2026-03-10T13:40:12.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[58955]: 162.7 scrub ok 2026-03-10T13:40:12.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[58955]: 162.1 scrub starts 2026-03-10T13:40:12.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[58955]: 162.1 scrub ok 2026-03-10T13:40:12.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[58955]: 162.6 deep-scrub starts 2026-03-10T13:40:12.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[58955]: 162.6 deep-scrub ok 2026-03-10T13:40:12.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3462532480' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm05-91051-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:12.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[58955]: osdmap e114: 8 total, 8 up, 8 in 2026-03-10T13:40:12.833 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:12.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[51512]: 162.7 scrub starts 2026-03-10T13:40:12.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[51512]: 162.7 scrub ok 2026-03-10T13:40:12.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[51512]: 162.1 scrub starts 2026-03-10T13:40:12.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[51512]: 162.1 scrub ok 2026-03-10T13:40:12.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[51512]: 162.6 deep-scrub starts 2026-03-10T13:40:12.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[51512]: 162.6 deep-scrub ok 2026-03-10T13:40:12.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3462532480' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm05-91051-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:12.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[51512]: osdmap e114: 8 total, 8 up, 8 in 2026-03-10T13:40:12.833 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:12 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:12.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:12 vm09 ceph-mon[53367]: 162.7 scrub starts 2026-03-10T13:40:12.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:12 vm09 ceph-mon[53367]: 162.7 scrub ok 2026-03-10T13:40:12.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:12 vm09 ceph-mon[53367]: 162.1 scrub starts 2026-03-10T13:40:12.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:12 vm09 ceph-mon[53367]: 162.1 scrub ok 2026-03-10T13:40:12.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:12 vm09 ceph-mon[53367]: 162.6 deep-scrub starts 2026-03-10T13:40:12.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:12 vm09 ceph-mon[53367]: 162.6 deep-scrub ok 2026-03-10T13:40:12.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:12 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3462532480' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm05-91051-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:12.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:12 vm09 ceph-mon[53367]: osdmap e114: 8 total, 8 up, 8 in 2026-03-10T13:40:12.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:12 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:13.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[51512]: pgmap v120: 492 pgs: 64 unknown, 1 active+clean+snaptrim, 427 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 92 KiB/s wr, 89 op/s 2026-03-10T13:40:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[51512]: 162.5 deep-scrub starts 2026-03-10T13:40:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[51512]: 162.5 deep-scrub ok 2026-03-10T13:40:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[51512]: 162.0 scrub starts 2026-03-10T13:40:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[51512]: 162.0 scrub ok 2026-03-10T13:40:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[51512]: osdmap e115: 8 total, 8 up, 8 in 2026-03-10T13:40:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/966089298' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm05-91018-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[58955]: pgmap v120: 492 pgs: 64 unknown, 1 active+clean+snaptrim, 427 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 92 KiB/s wr, 89 op/s 2026-03-10T13:40:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[58955]: 162.5 deep-scrub starts 2026-03-10T13:40:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[58955]: 162.5 deep-scrub ok 2026-03-10T13:40:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[58955]: 162.0 scrub starts 2026-03-10T13:40:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[58955]: 162.0 scrub ok 2026-03-10T13:40:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[58955]: osdmap e115: 8 total, 8 up, 8 in 2026-03-10T13:40:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/966089298' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm05-91018-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:13.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:13 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:13 vm09 ceph-mon[53367]: pgmap v120: 492 pgs: 64 unknown, 1 active+clean+snaptrim, 427 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 92 KiB/s wr, 89 op/s 2026-03-10T13:40:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:13 vm09 ceph-mon[53367]: 162.5 deep-scrub starts 2026-03-10T13:40:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:13 vm09 ceph-mon[53367]: 162.5 deep-scrub ok 2026-03-10T13:40:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:13 vm09 ceph-mon[53367]: 162.0 scrub starts 2026-03-10T13:40:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:13 vm09 ceph-mon[53367]: 162.0 scrub ok 2026-03-10T13:40:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:13 vm09 ceph-mon[53367]: osdmap e115: 8 total, 8 up, 8 in 2026-03-10T13:40:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:13 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/966089298' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm05-91018-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:13 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:14.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:14 vm05 ceph-mon[58955]: 162.9 scrub starts 2026-03-10T13:40:14.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:14 vm05 ceph-mon[58955]: 162.9 scrub ok 2026-03-10T13:40:14.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:14 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/966089298' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm05-91018-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:14.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:14 vm05 ceph-mon[58955]: osdmap e116: 8 total, 8 up, 8 in 2026-03-10T13:40:14.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:14.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:14 vm05 ceph-mon[58955]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:14.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:14.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:14 vm05 ceph-mon[51512]: 162.9 scrub starts 2026-03-10T13:40:14.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:14 vm05 ceph-mon[51512]: 162.9 scrub ok 2026-03-10T13:40:14.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/966089298' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm05-91018-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:14.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:14 vm05 ceph-mon[51512]: osdmap e116: 8 total, 8 up, 8 in 2026-03-10T13:40:14.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:14.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:14 vm05 ceph-mon[51512]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:14.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:14 vm09 ceph-mon[53367]: 162.9 scrub starts 2026-03-10T13:40:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:14 vm09 ceph-mon[53367]: 162.9 scrub ok 2026-03-10T13:40:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/966089298' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm05-91018-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:14 vm09 ceph-mon[53367]: osdmap e116: 8 total, 8 up, 8 in 2026-03-10T13:40:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:14 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:14 vm09 ceph-mon[53367]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[51512]: pgmap v123: 492 pgs: 7 creating+activating, 33 creating+peering, 3 unknown, 449 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 59 KiB/s rd, 0 B/s wr, 77 op/s 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[51512]: 162.8 scrub starts 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[51512]: 162.8 scrub ok 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[51512]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[51512]: from='client.39967 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]': finished 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[51512]: osdmap e117: 8 total, 8 up, 8 in 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[51512]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/367213213' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm05-91051-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[58955]: pgmap v123: 492 pgs: 7 creating+activating, 33 creating+peering, 3 unknown, 449 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 59 KiB/s rd, 0 B/s wr, 77 op/s 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[58955]: 162.8 scrub starts 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[58955]: 162.8 scrub ok 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[58955]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[58955]: from='client.39967 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]': finished 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[58955]: osdmap e117: 8 total, 8 up, 8 in 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[58955]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/367213213' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm05-91051-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:15.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:15 vm09 ceph-mon[53367]: pgmap v123: 492 pgs: 7 creating+activating, 33 creating+peering, 3 unknown, 449 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 59 KiB/s rd, 0 B/s wr, 77 op/s 2026-03-10T13:40:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:15 vm09 ceph-mon[53367]: 162.8 scrub starts 2026-03-10T13:40:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:15 vm09 ceph-mon[53367]: 162.8 scrub ok 2026-03-10T13:40:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:15 vm09 ceph-mon[53367]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:15 vm09 ceph-mon[53367]: from='client.39967 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]': finished 2026-03-10T13:40:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:15 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1946591265' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:15 vm09 ceph-mon[53367]: osdmap e117: 8 total, 8 up, 8 in 2026-03-10T13:40:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:15 vm09 ceph-mon[53367]: from='client.39967 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]: dispatch 2026-03-10T13:40:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/367213213' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm05-91051-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: Running main() from gmock_main.cc 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [==========] Running 13 tests from 4 test suites. 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] Global test environment set-up. 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshots 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapList 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapList (1965 ms) 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapRemove 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapRemove (2353 ms) 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.Rollback 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshots.Rollback (3191 ms) 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapGetName 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapGetName (1845 ms) 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshots (9354 ms total) 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 3 tests from LibRadosSnapshotsSelfManaged 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.Snap 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.Snap (4274 ms) 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.Rollback 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.Rollback (4330 ms) 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.FutureSnapRollback 
2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.FutureSnapRollback (5234 ms) 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 3 tests from LibRadosSnapshotsSelfManaged (13838 ms total) 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshotsEC 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapList 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapList (2909 ms) 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapRemove 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapRemove (2025 ms) 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.Rollback 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.Rollback (1924 ms) 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapGetName 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapGetName (2330 ms) 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshotsEC (9188 ms total) 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 2 tests from LibRadosSnapshotsSelfManagedEC 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManagedEC.Snap 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManagedEC.Snap (4217 ms) 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManagedEC.Rollback 2026-03-10T13:40:16.223 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManagedEC.Rollback (3917 ms) 2026-03-10T13:40:16.224 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] 2 tests from LibRadosSnapshotsSelfManagedEC (8134 ms total) 2026-03-10T13:40:16.224 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: 2026-03-10T13:40:16.224 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [----------] Global test environment tear-down 2026-03-10T13:40:16.224 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [==========] 13 tests from 4 test suites ran. (57706 ms total) 2026-03-10T13:40:16.224 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots: [ PASSED ] 13 tests. 
2026-03-10T13:40:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:17 vm05 ceph-mon[51512]: pgmap v126: 420 pgs: 32 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 73 op/s 2026-03-10T13:40:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:17 vm05 ceph-mon[51512]: from='client.39967 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]': finished 2026-03-10T13:40:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/367213213' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm05-91051-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:17 vm05 ceph-mon[51512]: osdmap e118: 8 total, 8 up, 8 in 2026-03-10T13:40:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2784429206' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm05-91018-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:17.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:17 vm05 ceph-mon[58955]: pgmap v126: 420 pgs: 32 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 73 op/s 2026-03-10T13:40:17.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:17 vm05 ceph-mon[58955]: from='client.39967 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]': finished 2026-03-10T13:40:17.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/367213213' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm05-91051-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:17.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:17 vm05 ceph-mon[58955]: osdmap e118: 8 total, 8 up, 8 in 2026-03-10T13:40:17.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2784429206' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm05-91018-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:17.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:17.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:17 vm09 ceph-mon[53367]: pgmap v126: 420 pgs: 32 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 73 op/s 2026-03-10T13:40:17.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:17 vm09 ceph-mon[53367]: from='client.39967 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm05-91333-15"}]': finished 2026-03-10T13:40:17.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:17 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/367213213' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm05-91051-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:17.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:17 vm09 ceph-mon[53367]: osdmap e118: 8 total, 8 up, 8 in 2026-03-10T13:40:17.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2784429206' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm05-91018-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:17.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:18.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:18 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2784429206' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm05-91018-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:18.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:18 vm09 ceph-mon[53367]: osdmap e119: 8 total, 8 up, 8 in 2026-03-10T13:40:18.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:18 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2431479961' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91476-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:18.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:18 vm09 ceph-mon[53367]: from='client.47543 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91476-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:18.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:18 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:18.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:40:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:40:18.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:18 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2784429206' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm05-91018-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:18.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:18 vm05 ceph-mon[51512]: osdmap e119: 8 total, 8 up, 8 in 2026-03-10T13:40:18.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:18 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2431479961' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91476-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:18.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:18 vm05 ceph-mon[51512]: from='client.47543 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91476-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:18.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:18 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:18.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:18 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2784429206' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm05-91018-18","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:18.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:18 vm05 ceph-mon[58955]: osdmap e119: 8 total, 8 up, 8 in 2026-03-10T13:40:18.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:18 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2431479961' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91476-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:18.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:18 vm05 ceph-mon[58955]: from='client.47543 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91476-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:18.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:18 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:19 vm05 ceph-mon[51512]: pgmap v129: 484 pgs: 96 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:40:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:19 vm05 ceph-mon[51512]: from='client.47543 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91476-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:19 vm05 ceph-mon[51512]: osdmap e120: 8 total, 8 up, 8 in 2026-03-10T13:40:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2443432378' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm05-91051-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:19 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:19 vm05 ceph-mon[58955]: pgmap v129: 484 pgs: 96 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:40:19.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:19 vm05 ceph-mon[58955]: from='client.47543 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91476-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:19.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:19 vm05 ceph-mon[58955]: osdmap e120: 8 total, 8 up, 8 in 2026-03-10T13:40:19.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2443432378' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm05-91051-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:19.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:19 vm09 ceph-mon[53367]: pgmap v129: 484 pgs: 96 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:40:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:19 vm09 ceph-mon[53367]: from='client.47543 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91476-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:19 vm09 ceph-mon[53367]: osdmap e120: 8 total, 8 up, 8 in 2026-03-10T13:40:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2443432378' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm05-91051-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:40:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:40:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [==========] Running 12 tests from 4 test suites. 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] Global test environment set-up. 
2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 1 test from LibRadosMiscVersion 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMiscVersion.Version 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMiscVersion.Version (0 ms) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 1 test from LibRadosMiscVersion (0 ms total) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 2 tests from LibRadosMiscConnectFailure 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMiscConnectFailure.ConnectFailure 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: unable to get monitor info from DNS SRV with service name: ceph-mon 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: 2026-03-10T13:39:18.629+0000 7f86bf165880 -1 failed for service _ceph-mon._tcp 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: 2026-03-10T13:39:18.629+0000 7f86bf165880 -1 monclient: get_monmap_and_config cannot identify monitors to contact 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMiscConnectFailure.ConnectFailure (85 ms) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMiscConnectFailure.ConnectTimeout 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMiscConnectFailure.ConnectTimeout (5010 ms) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 2 tests from LibRadosMiscConnectFailure (5095 ms total) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 1 test from LibRadosMiscPool 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMiscPool.PoolCreationRace 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: started 0x7f86a4068ab0 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: started 0x562297a847d0 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: started 2 aios 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: waiting 0x7f86a4068ab0 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: waiting 0x562297a847d0 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: done. 
2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMiscPool.PoolCreationRace (5729 ms) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 1 test from LibRadosMiscPool (5729 ms total) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 8 tests from LibRadosMisc 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.ClusterFSID 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.ClusterFSID (0 ms) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.Exec 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.Exec (97 ms) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.WriteSame 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.WriteSame (100 ms) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.CmpExt 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.CmpExt (20 ms) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.Applications 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.Applications (4952 ms) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.MinCompatOSD 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.MinCompatOSD (0 ms) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.MinCompatClient 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.MinCompatClient (0 ms) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ RUN ] LibRadosMisc.ShutdownRace 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ OK ] LibRadosMisc.ShutdownRace (43823 ms) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] 8 tests from LibRadosMisc (48992 ms total) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [----------] Global test environment tear-down 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [==========] 12 tests from 4 test suites ran. (61877 ms total) 2026-03-10T13:40:20.438 INFO:tasks.workunit.client.0.vm05.stdout: api_misc: [ PASSED ] 12 tests. 2026-03-10T13:40:20.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2443432378' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm05-91051-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:20.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:20 vm05 ceph-mon[58955]: osdmap e121: 8 total, 8 up, 8 in 2026-03-10T13:40:20.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:20 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3827719899' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-91018-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:20.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:20 vm05 ceph-mon[58955]: from='client.49444 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-91018-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:20.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:20.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:20 vm05 ceph-mon[58955]: from='client.49444 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-91018-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:20.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:20 vm05 ceph-mon[58955]: osdmap e122: 8 total, 8 up, 8 in 2026-03-10T13:40:20.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2443432378' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm05-91051-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:20.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:20 vm05 ceph-mon[51512]: osdmap e121: 8 total, 8 up, 8 in 2026-03-10T13:40:20.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3827719899' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-91018-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:20.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:20 vm05 ceph-mon[51512]: from='client.49444 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-91018-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:20.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:20.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:20 vm05 ceph-mon[51512]: from='client.49444 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-91018-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:20.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:20 vm05 ceph-mon[51512]: osdmap e122: 8 total, 8 up, 8 in 2026-03-10T13:40:20.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2443432378' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm05-91051-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:20.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:20 vm09 ceph-mon[53367]: osdmap e121: 8 total, 8 up, 8 in 2026-03-10T13:40:20.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:20 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3827719899' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-91018-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:20.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:20 vm09 ceph-mon[53367]: from='client.49444 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-91018-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:20.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:20.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:20 vm09 ceph-mon[53367]: from='client.49444 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm05-91018-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:20.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:20 vm09 ceph-mon[53367]: osdmap e122: 8 total, 8 up, 8 in 2026-03-10T13:40:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[58955]: pgmap v132: 516 pgs: 28 creating+peering, 53 unknown, 435 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:40:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[58955]: osdmap e123: 8 total, 8 up, 8 in 2026-03-10T13:40:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1455378294' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm05-91051-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[58955]: from='client.50137 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm05-91051-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[58955]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[58955]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[58955]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[51512]: pgmap v132: 516 pgs: 28 creating+peering, 53 unknown, 435 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:40:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[51512]: osdmap e123: 8 total, 8 up, 8 in 2026-03-10T13:40:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1455378294' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm05-91051-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[51512]: from='client.50137 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm05-91051-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[51512]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[51512]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[51512]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:21 vm09 ceph-mon[53367]: pgmap v132: 516 pgs: 28 creating+peering, 53 unknown, 435 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:40:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:21 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:21 vm09 ceph-mon[53367]: osdmap e123: 8 total, 8 up, 8 in 2026-03-10T13:40:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1455378294' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm05-91051-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:21 vm09 ceph-mon[53367]: from='client.50137 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm05-91051-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:21 vm09 ceph-mon[53367]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:21 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:21 vm09 ceph-mon[53367]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:21 vm09 ceph-mon[53367]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[58955]: pgmap v135: 420 pgs: 32 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T13:40:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[58955]: from='client.50137 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm05-91051-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[58955]: from='client.50152 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-91476-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[58955]: osdmap e124: 8 total, 8 up, 8 in 2026-03-10T13:40:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/4123174219' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91018-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[58955]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-91476-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[58955]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91018-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:40:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:40:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[51512]: pgmap v135: 420 pgs: 32 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T13:40:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[51512]: from='client.50137 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm05-91051-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[51512]: from='client.50152 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:23.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-91476-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:23.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[51512]: osdmap e124: 8 total, 8 up, 8 in 2026-03-10T13:40:23.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/4123174219' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91018-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:23.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[51512]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-91476-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:23.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[51512]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91018-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:23.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:23.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:40:23.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:40:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:23 vm09 ceph-mon[53367]: pgmap v135: 420 pgs: 32 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T13:40:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:23 vm09 ceph-mon[53367]: from='client.50137 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm05-91051-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:23 vm09 ceph-mon[53367]: from='client.50152 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-91476-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:23 vm09 ceph-mon[53367]: osdmap e124: 8 total, 8 up, 8 in 2026-03-10T13:40:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:23 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/4123174219' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91018-20","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:23 vm09 ceph-mon[53367]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-91476-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch
2026-03-10T13:40:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:23 vm09 ceph-mon[53367]: from='client.50146 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91018-20","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:23 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:40:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:40:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:24 vm05 ceph-mon[58955]: from='client.50146 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91018-20","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:24 vm05 ceph-mon[58955]: osdmap e125: 8 total, 8 up, 8 in
2026-03-10T13:40:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:24 vm05 ceph-mon[58955]: from='client.50152 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-91476-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]': finished
2026-03-10T13:40:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:24 vm05 ceph-mon[58955]: osdmap e126: 8 total, 8 up, 8 in
2026-03-10T13:40:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:24 vm05 ceph-mon[58955]: from='client.50158 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1322604864' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:24 vm05 ceph-mon[51512]: from='client.50146 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91018-20","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:24 vm05 ceph-mon[51512]: osdmap e125: 8 total, 8 up, 8 in
2026-03-10T13:40:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:24 vm05 ceph-mon[51512]: from='client.50152 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-91476-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]': finished
2026-03-10T13:40:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:24 vm05 ceph-mon[51512]: osdmap e126: 8 total, 8 up, 8 in
2026-03-10T13:40:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:24 vm05 ceph-mon[51512]: from='client.50158 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1322604864' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:24.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:24 vm09 ceph-mon[53367]: from='client.50146 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91018-20","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:24.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:24 vm09 ceph-mon[53367]: osdmap e125: 8 total, 8 up, 8 in
2026-03-10T13:40:24.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:24.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:24 vm09 ceph-mon[53367]: from='client.50152 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm05-91476-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]': finished
2026-03-10T13:40:24.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:24 vm09 ceph-mon[53367]: osdmap e126: 8 total, 8 up, 8 in
2026-03-10T13:40:24.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:24 vm09 ceph-mon[53367]: from='client.50158 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:24.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1322604864' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:25.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:25 vm05 ceph-mon[58955]: pgmap v138: 420 pgs: 11 creating+peering, 409 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T13:40:25.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:25.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:25 vm05 ceph-mon[58955]: from='client.50158 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-17","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:25.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:25 vm05 ceph-mon[58955]: osdmap e127: 8 total, 8 up, 8 in
2026-03-10T13:40:25.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:25 vm05 ceph-mon[51512]: pgmap v138: 420 pgs: 11 creating+peering, 409 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T13:40:25.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:25.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:25 vm05 ceph-mon[51512]: from='client.50158 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-17","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:25.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:25 vm05 ceph-mon[51512]: osdmap e127: 8 total, 8 up, 8 in
2026-03-10T13:40:25.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:25 vm09 ceph-mon[53367]: pgmap v138: 420 pgs: 11 creating+peering, 409 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T13:40:25.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:25.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:25 vm09 ceph-mon[53367]: from='client.50158 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-17","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:25.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:25 vm09 ceph-mon[53367]: osdmap e127: 8 total, 8 up, 8 in
2026-03-10T13:40:26.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:26.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2818948609' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-91018-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:26.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:26 vm09 ceph-mon[53367]: from='client.49685 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-91018-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:26.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:26 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:40:26.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:26 vm09 ceph-mon[53367]: from='client.49685 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-91018-21","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:26.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:26 vm09 ceph-mon[53367]: osdmap e128: 8 total, 8 up, 8 in
2026-03-10T13:40:26.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2818948609' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-91018-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:26 vm05 ceph-mon[58955]: from='client.49685 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-91018-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:26 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:40:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:26 vm05 ceph-mon[58955]: from='client.49685 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-91018-21","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:26 vm05 ceph-mon[58955]: osdmap e128: 8 total, 8 up, 8 in
2026-03-10T13:40:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:27.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:27.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2818948609' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-91018-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:27.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:26 vm05 ceph-mon[51512]: from='client.49685 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-91018-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:27.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:26 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:40:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:26 vm05 ceph-mon[51512]: from='client.49685 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm05-91018-21","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:26 vm05 ceph-mon[51512]: osdmap e128: 8 total, 8 up, 8 in
2026-03-10T13:40:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:27.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:27 vm09 ceph-mon[53367]: pgmap v142: 428 pgs: 40 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-10T13:40:27.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:27 vm09 ceph-mon[53367]: osdmap e129: 8 total, 8 up, 8 in
2026-03-10T13:40:27.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3043190978' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-91051-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:27.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:27 vm09 ceph-mon[53367]: from='client.49691 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-91051-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:27.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:27 vm05 ceph-mon[58955]: pgmap v142: 428 pgs: 40 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-10T13:40:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:27 vm05 ceph-mon[58955]: osdmap e129: 8 total, 8 up, 8 in
2026-03-10T13:40:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3043190978' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-91051-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:27 vm05 ceph-mon[58955]: from='client.49691 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-91051-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:27 vm05 ceph-mon[51512]: pgmap v142: 428 pgs: 40 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-10T13:40:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:27 vm05 ceph-mon[51512]: osdmap e129: 8 total, 8 up, 8 in
2026-03-10T13:40:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3043190978' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-91051-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:27 vm05 ceph-mon[51512]: from='client.49691 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-91051-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:28.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:40:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T13:40:29.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:29 vm09 ceph-mon[53367]: from='client.49691 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-91051-18","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:29.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:29 vm09 ceph-mon[53367]: osdmap e130: 8 total, 8 up, 8 in
2026-03-10T13:40:29.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:29 vm09 ceph-mon[53367]: pgmap v145: 460 pgs: 72 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail
2026-03-10T13:40:29.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3938880873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm05-91018-22","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:29.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:40:29.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:29 vm05 ceph-mon[58955]: from='client.49691 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-91051-18","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:29 vm05 ceph-mon[58955]: osdmap e130: 8 total, 8 up, 8 in
2026-03-10T13:40:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:29 vm05 ceph-mon[58955]: pgmap v145: 460 pgs: 72 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail
2026-03-10T13:40:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3938880873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm05-91018-22","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:40:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:29 vm05 ceph-mon[51512]: from='client.49691 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm05-91051-18","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:29 vm05 ceph-mon[51512]: osdmap e130: 8 total, 8 up, 8 in
2026-03-10T13:40:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:29 vm05 ceph-mon[51512]: pgmap v145: 460 pgs: 72 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail
2026-03-10T13:40:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3938880873' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm05-91018-22","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:40:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3938880873' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm05-91018-22","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:30 vm05 ceph-mon[58955]: osdmap e131: 8 total, 8 up, 8 in
2026-03-10T13:40:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
2026-03-10T13:40:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:30 vm05 ceph-mon[58955]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3938880873' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm05-91018-22","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:30 vm05 ceph-mon[51512]: osdmap e131: 8 total, 8 up, 8 in
2026-03-10T13:40:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
2026-03-10T13:40:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:30 vm05 ceph-mon[51512]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:30.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:40:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:40:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T13:40:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3938880873' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm05-91018-22","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:30 vm09 ceph-mon[53367]: osdmap e131: 8 total, 8 up, 8 in
2026-03-10T13:40:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
2026-03-10T13:40:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:30 vm09 ceph-mon[53367]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: Running main() from gmock_main.cc
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [==========] Running 21 tests from 5 test suites.
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] Global test environment set-up.
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 5 tests from LibRadosSnapshotsPP
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: seed 91476
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapListPP
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapListPP (2002 ms)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapRemovePP
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapRemovePP (2234 ms)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.RollbackPP
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.RollbackPP (3274 ms)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapGetNamePP
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapGetNamePP (1770 ms)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapCreateRemovePP
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapCreateRemovePP (3382 ms)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 5 tests from LibRadosSnapshotsPP (12662 ms total)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp:
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 7 tests from LibRadosSnapshotsSelfManagedPP
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.SnapPP
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.SnapPP (4408 ms)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.RollbackPP
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.RollbackPP (4010 ms)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.SnapOverlapPP
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.SnapOverlapPP (5924 ms)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.Bug11677
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.Bug11677 (3925 ms)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.OrderSnap
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.OrderSnap (2105 ms)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.WriteRollback
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: /ceph/rpmbuild/BUILD/ceph-19.2.3-678-ge911bdeb/src/test/librados/snapshots_cxx.cc:460: Skipped
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp:
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ SKIPPED ] LibRadosSnapshotsSelfManagedPP.WriteRollback (0 ms)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.ReusePurgedSnap
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: deleting snap 14 in pool LibRadosSnapshotsSelfManagedPP_vm05-91476-7
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: waiting for snaps to purge
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.ReusePurgedSnap (18167 ms)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 7 tests from LibRadosSnapshotsSelfManagedPP (38539 ms total)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp:
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 2 tests from LibRadosPoolIsInSelfmanagedSnapsMode
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosPoolIsInSelfmanagedSnapsMode.NotConnected
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosPoolIsInSelfmanagedSnapsMode.NotConnected (9 ms)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosPoolIsInSelfmanagedSnapsMode.FreshInstance
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosPoolIsInSelfmanagedSnapsMode.FreshInstance (6174 ms)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 2 tests from LibRadosPoolIsInSelfmanagedSnapsMode (6183 ms total)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp:
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 4 tests from LibRadosSnapshotsECPP
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.SnapListPP
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapListPP (2667 ms)
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.SnapRemovePP
2026-03-10T13:40:31.110 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapRemovePP (1998 ms)
2026-03-10T13:40:31.111 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.RollbackPP
2026-03-10T13:40:31.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: pgmap v147: 428 pgs: 32 creating+peering, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 508 B/s wr, 1 op/s
2026-03-10T13:40:31.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app1","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:31.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: osdmap e132: 8 total, 8 up, 8 in
2026-03-10T13:40:31.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app2"}]: dispatch
2026-03-10T13:40:31.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3163719931' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-91051-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:31.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:31.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: from='client.50176 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-91051-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:31.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:31.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
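Note: the cmd=[{...}] payloads in the mon audit entries above are the JSON form of ordinary ceph CLI commands, logged once as "dispatch" and again as "finished" and mirrored on all three mons. As a rough illustration only (pool and profile names are copied from the log; this assumes a standard squid ceph client with admin credentials, not a command actually typed during the run), the equivalent invocations would be:

  # Create the EC pool used by the snapshot suite (pg_num/pgp_num 8, named profile).
  ceph osd pool create LibRadosSnapshotsECPP_vm05-91476-16 8 8 erasure testprofile-LibRadosSnapshotsECPP_vm05-91476-16
  # Tag a test pool with the 'rados' application, as each suite does during setup.
  ceph osd pool application enable OperateMtime_vm05-91018-20 rados --yes-i-really-mean-it
  # The recurring poller seen as client.? v1:192.168.123.105:0/1512736556.
  ceph status --format json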
2026-03-10T13:40:31.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:40:31.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: from='client.50176 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-91051-19","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:31.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app2","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:31.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: osdmap e133: 8 total, 8 up, 8 in
2026-03-10T13:40:31.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2670792538' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm05-91018-23","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:31.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"dne","key":"key1","value":"value1"}]: dispatch
2026-03-10T13:40:31.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1","value":"value1"}]: dispatch
2026-03-10T13:40:31.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:31 vm09 ceph-mon[53367]: from='client.24703 ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1","value":"value1"}]: dispatch
2026-03-10T13:40:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: pgmap v147: 428 pgs: 32 creating+peering, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 508 B/s wr, 1 op/s
2026-03-10T13:40:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app1","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: osdmap e132: 8 total, 8 up, 8 in
2026-03-10T13:40:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app2"}]: dispatch
2026-03-10T13:40:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3163719931' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-91051-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: from='client.50176 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-91051-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: from='client.50176 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-91051-19","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app2","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: osdmap e133: 8 total, 8 up, 8 in
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2670792538' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm05-91018-23","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"dne","key":"key1","value":"value1"}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1","value":"value1"}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[58955]: from='client.24703 ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1","value":"value1"}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: pgmap v147: 428 pgs: 32 creating+peering, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 508 B/s wr, 1 op/s
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app1","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: osdmap e132: 8 total, 8 up, 8 in
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app2"}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3163719931' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-91051-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: from='client.50176 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-91051-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: from='client.24703 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: from='client.50176 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm05-91051-19","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: from='client.24703 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm05-91340-1","app": "app2","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: osdmap e133: 8 total, 8 up, 8 in
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2670792538' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm05-91018-23","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"dne","key":"key1","value":"value1"}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1","value":"value1"}]: dispatch
2026-03-10T13:40:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:31 vm05 ceph-mon[51512]: from='client.24703 ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1","value":"value1"}]: dispatch
2026-03-10T13:40:32.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:32.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2670792538' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm05-91018-23","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:32.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:32 vm09 ceph-mon[53367]: from='client.24703 ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1","value":"value1"}]': finished
2026-03-10T13:40:32.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:32 vm09 ceph-mon[53367]: osdmap e134: 8 total, 8 up, 8 in
2026-03-10T13:40:32.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T13:40:32.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:32 vm09 ceph-mon[53367]: from='client.24703 ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T13:40:32.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:32.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2670792538' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm05-91018-23","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:32.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:32 vm05 ceph-mon[58955]: from='client.24703 ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1","value":"value1"}]': finished
2026-03-10T13:40:32.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:32 vm05 ceph-mon[58955]: osdmap e134: 8 total, 8 up, 8 in
2026-03-10T13:40:32.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T13:40:32.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:32 vm05 ceph-mon[58955]: from='client.24703 ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T13:40:32.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:32.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2670792538' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm05-91018-23","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:32.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:32 vm05 ceph-mon[51512]: from='client.24703 ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1","value":"value1"}]': finished
2026-03-10T13:40:32.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:32 vm05 ceph-mon[51512]: osdmap e134: 8 total, 8 up, 8 in
2026-03-10T13:40:32.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T13:40:32.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:32 vm05 ceph-mon[51512]: from='client.24703 ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T13:40:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:33 vm09 ceph-mon[53367]: pgmap v151: 428 pgs: 32 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T13:40:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:33 vm09 ceph-mon[53367]: from='client.24703 ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key2","value":"value2"}]': finished
2026-03-10T13:40:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:33 vm09 ceph-mon[53367]: osdmap e135: 8 total, 8 up, 8 in
2026-03-10T13:40:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1"}]: dispatch
2026-03-10T13:40:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/981227272' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-91051-20","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:33 vm09 ceph-mon[53367]: from='client.24703 ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1"}]: dispatch
2026-03-10T13:40:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:33 vm09 ceph-mon[53367]: from='client.50188 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-91051-20","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[58955]: pgmap v151: 428 pgs: 32 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T13:40:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[58955]: from='client.24703 ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key2","value":"value2"}]': finished
2026-03-10T13:40:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[58955]: osdmap e135: 8 total, 8 up, 8 in
2026-03-10T13:40:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1"}]: dispatch
2026-03-10T13:40:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/981227272' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-91051-20","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[58955]: from='client.24703 ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1"}]: dispatch
2026-03-10T13:40:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[58955]: from='client.50188 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-91051-20","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:33.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[51512]: pgmap v151: 428 pgs: 32 unknown, 396 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T13:40:33.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[51512]: from='client.24703 ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key2","value":"value2"}]': finished
2026-03-10T13:40:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[51512]: osdmap e135: 8 total, 8 up, 8 in
2026-03-10T13:40:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/346898878' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1"}]: dispatch
2026-03-10T13:40:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/981227272' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-91051-20","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[51512]: from='client.24703 ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1"}]: dispatch
2026-03-10T13:40:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:33 vm05 ceph-mon[51512]: from='client.50188 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-91051-20","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:34 vm09 ceph-mon[53367]: from='client.24703 ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1"}]': finished
2026-03-10T13:40:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:34 vm09 ceph-mon[53367]: from='client.50188 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-91051-20","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:34 vm09 ceph-mon[53367]: osdmap e136: 8 total, 8 up, 8 in
2026-03-10T13:40:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch
2026-03-10T13:40:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:34 vm09 ceph-mon[53367]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch
2026-03-10T13:40:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4093504803' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-91018-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:34 vm09 ceph-mon[53367]: from='client.50191 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-91018-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[58955]: from='client.24703 ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1"}]': finished
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[58955]: from='client.50188 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-91051-20","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[58955]: osdmap e136: 8 total, 8 up, 8 in
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[58955]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4093504803' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-91018-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[58955]: from='client.50191 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-91018-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[51512]: from='client.24703 ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm05-91340-1","app":"app1","key":"key1"}]': finished
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[51512]: from='client.50188 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm05-91051-20","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[51512]: osdmap e136: 8 total, 8 up, 8 in
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[51512]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4093504803' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-91018-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:34.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:34 vm05 ceph-mon[51512]: from='client.50191 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-91018-24","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.Rollb api_misc_pp: [==========] Running 31 tests from 7 test suites.
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] Global test environment set-up.
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscVersion
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscVersion.VersionPP
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscVersion.VersionPP (0 ms)
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscVersion (0 ms total)
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp:
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 22 tests from LibRadosMiscPP
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: seed 91340
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.WaitOSDMapPP
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.WaitOSDMapPP (3 ms)
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongNamePP
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongNamePP (377 ms)
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongLocatorPP
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongLocatorPP (23 ms)
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongNSpacePP
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongNSpacePP (20 ms)
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongAttrNamePP
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongAttrNamePP (17 ms)
2026-03-10T13:40:35.127 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.ExecPP
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.ExecPP (2 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BadFlagsPP
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BadFlagsPP (3 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Operate1PP
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Operate1PP (5 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Operate2PP
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Operate2PP (3 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BigObjectPP
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BigObjectPP (15 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AioOperatePP
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AioOperatePP (3 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AssertExistsPP
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AssertExistsPP (6 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AssertVersionPP
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AssertVersionPP (10 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BigAttrPP
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: osd_max_attr_size = 0
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: osd_max_attr_size == 0; skipping test
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BigAttrPP (4415 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CopyPP
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CopyPP (1027 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CopyScrubPP
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: waiting for initial deep scrubs...
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: done waiting, doing copies
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: waiting for final deep scrubs...
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: done waiting
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CopyScrubPP (63497 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.WriteSamePP
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.WriteSamePP (6 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CmpExtPP
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CmpExtPP (2 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Applications
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Applications (4667 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.MinCompatOSD
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.MinCompatOSD (0 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.MinCompatClient
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.MinCompatClient (0 ms)
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Conf
2026-03-10T13:40:35.128 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Conf (0 ms)
2026-03-10T13:40:35.143 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 22 tests from LibRadosMisc snapshots: Running main() from gmock_main.cc
2026-03-10T13:40:35.143 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [==========] Running 11 tests from 2 test suites.
2026-03-10T13:40:35.143 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [----------] Global test environment set-up.
2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [----------] 5 tests from NeoRadosSnapshots 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapList 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapList (4911 ms) 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapRemove 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapRemove (6162 ms) 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSnapshots.Rollback 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSnapshots.Rollback (3294 ms) 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapGetName 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapGetName (5308 ms) 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapCreateRemove 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapCreateRemove (6848 ms) 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [----------] 5 tests from NeoRadosSnapshots (26523 ms total) 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [----------] 6 tests from NeoRadosSelfManagedSnaps 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Snap 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Snap (5176 ms) 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Rollback 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Rollback (6172 ms) 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.SnapOverlap 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.SnapOverlap (7951 ms) 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Bug11677 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Bug11677 (6059 ms) 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.OrderSnap 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.OrderSnap (3969 ms) 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.ReusePurgedSnap 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: Deleting snap 3 in pool ReusePurgedSnapvm05-92426-11. 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: Waiting for snaps to purge. 
2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.ReusePurgedSnap (19889 ms) 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [----------] 6 tests from NeoRadosSelfManagedSnaps (49216 ms total) 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [----------] Global test environment tear-down 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [==========] 11 tests from 2 test suites ran. (75742 ms total) 2026-03-10T13:40:35.144 INFO:tasks.workunit.client.0.vm05.stdout: snapshots: [ PASSED ] 11 tests. 2026-03-10T13:40:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:35 vm09 ceph-mon[53367]: pgmap v153: 428 pgs: 32 creating+peering, 1 active+clean+snaptrim, 395 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:40:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:35 vm09 ceph-mon[53367]: from='client.50152 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]': finished 2026-03-10T13:40:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:35 vm09 ceph-mon[53367]: from='client.50191 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-91018-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:35 vm09 ceph-mon[53367]: osdmap e137: 8 total, 8 up, 8 in 2026-03-10T13:40:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:35 vm09 ceph-mon[53367]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:35 vm05 ceph-mon[58955]: pgmap v153: 428 pgs: 32 creating+peering, 1 active+clean+snaptrim, 395 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:40:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:35 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:35 vm05 ceph-mon[58955]: from='client.50152 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]': finished 2026-03-10T13:40:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:35 vm05 ceph-mon[58955]: from='client.50191 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-91018-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:35 vm05 ceph-mon[58955]: osdmap e137: 8 total, 8 up, 8 in 2026-03-10T13:40:35.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:35 vm05 ceph-mon[58955]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:35 vm05 ceph-mon[51512]: pgmap v153: 428 pgs: 32 creating+peering, 1 active+clean+snaptrim, 395 active+clean; 216 MiB data, 1.3 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:40:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:35 vm05 ceph-mon[51512]: from='client.50152 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm05-91476-16"}]': finished 2026-03-10T13:40:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:35 vm05 ceph-mon[51512]: from='client.50191 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm05-91018-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3296376835' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:35 vm05 ceph-mon[51512]: osdmap e137: 8 total, 8 up, 8 in 2026-03-10T13:40:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:35 vm05 ceph-mon[51512]: from='client.50152 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-91476-16"}]: dispatch 2026-03-10T13:40:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91340-24"}]: dispatch 2026-03-10T13:40:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91340-24"}]: dispatch 2026-03-10T13:40:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91340-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: from='client.50152 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-91476-16"}]': finished 2026-03-10T13:40:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91340-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: osdmap e138: 8 total, 8 up, 8 in 2026-03-10T13:40:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91340-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91340-24"}]: dispatch 2026-03-10T13:40:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/861368193' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm05-91051-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[58955]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91340-24"}]: dispatch 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91340-24"}]: dispatch 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91340-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: from='client.50152 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-91476-16"}]': finished 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91340-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: osdmap e138: 8 total, 8 up, 8 in 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91340-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91340-24"}]: dispatch 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/861368193' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm05-91051-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:36 vm05 ceph-mon[51512]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91340-24"}]: dispatch 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91340-24"}]: dispatch 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91340-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: from='client.50152 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm05-91476-16"}]': finished 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91340-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: osdmap e138: 8 total, 8 up, 8 in 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91340-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91340-24"}]: dispatch 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/861368193' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm05-91051-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:36 vm09 ceph-mon[53367]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:37.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:37 vm05 ceph-mon[58955]: pgmap v156: 356 pgs: 32 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:37.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:37.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/861368193' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm05-91051-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:37.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:37 vm05 ceph-mon[58955]: from='client.49724 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:37.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:37.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:37 vm05 ceph-mon[58955]: osdmap e139: 8 total, 8 up, 8 in 2026-03-10T13:40:37.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:37 vm05 ceph-mon[58955]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:37 vm05 ceph-mon[51512]: pgmap v156: 356 pgs: 32 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:37 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/861368193' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm05-91051-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:37 vm05 ceph-mon[51512]: from='client.49724 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:37 vm05 ceph-mon[51512]: osdmap e139: 8 total, 8 up, 8 in 2026-03-10T13:40:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:37 vm05 ceph-mon[51512]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:37 vm09 ceph-mon[53367]: pgmap v156: 356 pgs: 32 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/861368193' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm05-91051-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:37 vm09 ceph-mon[53367]: from='client.49724 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:37 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:37 vm09 ceph-mon[53367]: osdmap e139: 8 total, 8 up, 8 in 2026-03-10T13:40:37.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:37 vm09 ceph-mon[53367]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2836625770' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm05-91018-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:40:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:40:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:40:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:40:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:40:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91340-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91340-24"}]': finished 2026-03-10T13:40:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2836625770' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm05-91018-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[58955]: osdmap e140: 8 total, 8 up, 8 in 2026-03-10T13:40:38.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2836625770' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm05-91018-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:40:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:40:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:40:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:40:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:40:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91340-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91340-24"}]': finished 2026-03-10T13:40:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2836625770' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm05-91018-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:38 vm05 ceph-mon[51512]: osdmap e140: 8 total, 8 up, 8 in 2026-03-10T13:40:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2836625770' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm05-91018-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:38 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:40:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:40:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:40:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:40:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:38 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:40:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91340-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91340-24"}]': finished 2026-03-10T13:40:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2836625770' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm05-91018-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:38 vm09 ceph-mon[53367]: osdmap e140: 8 total, 8 up, 8 in 2026-03-10T13:40:38.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:40:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:40:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[58955]: pgmap v159: 388 pgs: 64 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[58955]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 2 stray daemon(s) not managed by cephadm) 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[58955]: from='client.49724 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]': finished 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[58955]: osdmap e141: 8 total, 8 up, 8 in 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/761116255' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91051-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91340-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[58955]: from='client.49733 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91051-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[51512]: pgmap v159: 388 pgs: 64 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[51512]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 2 stray daemon(s) not managed by cephadm) 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[51512]: from='client.49724 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]': finished 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[51512]: osdmap e141: 8 total, 8 up, 8 in 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/761116255' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91051-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91340-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:39 vm05 ceph-mon[51512]: from='client.49733 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91051-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:39 vm09 ceph-mon[53367]: pgmap v159: 388 pgs: 64 unknown, 324 active+clean; 464 KiB data, 1.3 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:39 vm09 ceph-mon[53367]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 2 stray daemon(s) not managed by cephadm) 2026-03-10T13:40:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:40:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:39 vm09 ceph-mon[53367]: from='client.49724 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm05-91476-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]': finished 2026-03-10T13:40:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:39 vm09 ceph-mon[53367]: osdmap e141: 8 total, 8 up, 8 in 2026-03-10T13:40:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/761116255' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91051-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91340-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:39 vm09 ceph-mon[53367]: from='client.49733 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91051-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:40:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:40:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:40:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:40 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91340-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:40 vm05 ceph-mon[58955]: from='client.49733 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91051-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:40 vm05 ceph-mon[58955]: osdmap e142: 8 total, 8 up, 8 in 2026-03-10T13:40:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1846009591' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm05-91018-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91340-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91340-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:40 vm05 ceph-mon[51512]: from='client.49733 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91051-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:40 vm05 ceph-mon[51512]: osdmap e142: 8 total, 8 up, 8 in 2026-03-10T13:40:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1846009591' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm05-91018-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91340-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:40 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91340-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:40 vm09 ceph-mon[53367]: from='client.49733 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm05-91051-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:40 vm09 ceph-mon[53367]: osdmap e142: 8 total, 8 up, 8 in 2026-03-10T13:40:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1846009591' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm05-91018-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91340-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:41 vm05 ceph-mon[58955]: pgmap v162: 404 pgs: 7 creating+peering, 73 unknown, 324 active+clean; 464 KiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T13:40:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:41 vm05 ceph-mon[58955]: Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1846009591' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm05-91018-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91340-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:41 vm05 ceph-mon[58955]: osdmap e143: 8 total, 8 up, 8 in 2026-03-10T13:40:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:41 vm05 ceph-mon[51512]: pgmap v162: 404 pgs: 7 creating+peering, 73 unknown, 324 active+clean; 464 KiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T13:40:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:41 vm05 ceph-mon[51512]: Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:41 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1846009591' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm05-91018-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91340-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:41 vm05 ceph-mon[51512]: osdmap e143: 8 total, 8 up, 8 in 2026-03-10T13:40:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:41 vm09 ceph-mon[53367]: pgmap v162: 404 pgs: 7 creating+peering, 73 unknown, 324 active+clean; 464 KiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T13:40:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:41 vm09 ceph-mon[53367]: Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1846009591' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm05-91018-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91340-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:41 vm09 ceph-mon[53367]: osdmap e143: 8 total, 8 up, 8 in 2026-03-10T13:40:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[58955]: osdmap e144: 8 total, 8 up, 8 in 2026-03-10T13:40:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/400845998' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-91051-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[58955]: from='client.49739 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-91051-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-91018-27"}]: dispatch 2026-03-10T13:40:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-91018-27"}]: dispatch 2026-03-10T13:40:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm05-91018-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[51512]: osdmap e144: 8 total, 8 up, 8 in 2026-03-10T13:40:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/400845998' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-91051-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[51512]: from='client.49739 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-91051-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-91018-27"}]: dispatch 2026-03-10T13:40:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-91018-27"}]: dispatch 2026-03-10T13:40:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm05-91018-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:42 vm09 ceph-mon[53367]: osdmap e144: 8 total, 8 up, 8 in 2026-03-10T13:40:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:42 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/400845998' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-91051-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:42 vm09 ceph-mon[53367]: from='client.49739 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-91051-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-91018-27"}]: dispatch 2026-03-10T13:40:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-91018-27"}]: dispatch 2026-03-10T13:40:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm05-91018-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:43 vm05 ceph-mon[58955]: pgmap v165: 404 pgs: 7 creating+peering, 73 unknown, 324 active+clean; 464 KiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:40:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:43 vm05 ceph-mon[58955]: from='client.49739 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-91051-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm05-91018-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:43 vm05 ceph-mon[58955]: osdmap e145: 8 total, 8 up, 8 in 2026-03-10T13:40:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm05-91018-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm05-91018-27"}]: dispatch 2026-03-10T13:40:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91340-24"}]: dispatch 2026-03-10T13:40:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:43 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:43 vm05 ceph-mon[51512]: pgmap v165: 404 pgs: 7 creating+peering, 73 unknown, 324 active+clean; 464 KiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:40:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:43 vm05 ceph-mon[51512]: from='client.49739 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-91051-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm05-91018-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:43 vm05 ceph-mon[51512]: osdmap e145: 8 total, 8 up, 8 in 2026-03-10T13:40:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm05-91018-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm05-91018-27"}]: dispatch 2026-03-10T13:40:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91340-24"}]: dispatch 2026-03-10T13:40:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:43 vm09 ceph-mon[53367]: pgmap v165: 404 pgs: 7 creating+peering, 73 unknown, 324 active+clean; 464 KiB data, 685 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:40:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:43 vm09 ceph-mon[53367]: from='client.49739 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm05-91051-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm05-91018-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:43 vm09 ceph-mon[53367]: osdmap e145: 8 total, 8 up, 8 in 2026-03-10T13:40:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm05-91018-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm05-91018-27"}]: dispatch 2026-03-10T13:40:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:43 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91340-24"}]: dispatch 2026-03-10T13:40:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:45.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:45 vm09 ceph-mon[53367]: pgmap v168: 364 pgs: 6 active+clean+snaptrim, 21 active+clean+snaptrim_wait, 19 creating+peering, 318 active+clean; 478 KiB data, 696 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-10T13:40:45.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91340-24"}]': finished 2026-03-10T13:40:45.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:45 vm09 ceph-mon[53367]: osdmap e146: 8 total, 8 up, 8 in 2026-03-10T13:40:45.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91340-24"}]: dispatch 2026-03-10T13:40:45.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:45.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:40:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:45 vm05 ceph-mon[58955]: pgmap v168: 364 pgs: 6 active+clean+snaptrim, 21 active+clean+snaptrim_wait, 19 creating+peering, 318 active+clean; 478 KiB data, 696 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-10T13:40:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91340-24"}]': finished 2026-03-10T13:40:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:45 vm05 ceph-mon[58955]: osdmap e146: 8 total, 8 up, 8 in 2026-03-10T13:40:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91340-24"}]: dispatch 2026-03-10T13:40:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:45 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:40:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:45 vm05 ceph-mon[51512]: pgmap v168: 364 pgs: 6 active+clean+snaptrim, 21 active+clean+snaptrim_wait, 19 creating+peering, 318 active+clean; 478 KiB data, 696 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-10T13:40:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91340-24"}]': finished 2026-03-10T13:40:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:45 vm05 ceph-mon[51512]: osdmap e146: 8 total, 8 up, 8 in 2026-03-10T13:40:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91340-24"}]: dispatch 2026-03-10T13:40:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:40:46.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm05-91018-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm05-91018-27"}]': finished 2026-03-10T13:40:46.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91340-24"}]': finished 2026-03-10T13:40:46.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:40:46.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:46 vm09 ceph-mon[53367]: osdmap e147: 8 total, 8 up, 8 in 2026-03-10T13:40:46.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-13"}]: dispatch 2026-03-10T13:40:46.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:46 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1250564153' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-91051-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:46.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:46 vm09 ceph-mon[53367]: from='client.49742 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-91051-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:46.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:46.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:46 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm05-91018-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm05-91018-27"}]': finished 2026-03-10T13:40:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91340-24"}]': finished 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[58955]: osdmap e147: 8 total, 8 up, 8 in 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-13"}]: dispatch 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1250564153' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-91051-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[58955]: from='client.49742 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-91051-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1163440283' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm05-91018-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm05-91018-27"}]': finished 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1121457237' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91340-24"}]': finished 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[51512]: osdmap e147: 8 total, 8 up, 8 in 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-13"}]: dispatch 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1250564153' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-91051-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[51512]: from='client.49742 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-91051-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:46 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:47 vm09 ceph-mon[53367]: pgmap v171: 372 pgs: 40 unknown, 6 active+clean+snaptrim, 21 active+clean+snaptrim_wait, 305 active+clean; 478 KiB data, 696 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-10T13:40:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:47 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:40:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:47 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-13"}]': finished 2026-03-10T13:40:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:47 vm09 ceph-mon[53367]: from='client.49742 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-91051-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:47 vm09 ceph-mon[53367]: osdmap e148: 8 total, 8 up, 8 in 2026-03-10T13:40:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:47 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2458086456' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-91340-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:47 vm09 ceph-mon[53367]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-91340-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:47 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[58955]: pgmap v171: 372 pgs: 40 unknown, 6 active+clean+snaptrim, 21 active+clean+snaptrim_wait, 305 active+clean; 478 KiB data, 696 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-13"}]': finished 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[58955]: from='client.49742 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-91051-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[58955]: osdmap e148: 8 total, 8 up, 8 in 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2458086456' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-91340-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[58955]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-91340-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[51512]: pgmap v171: 372 pgs: 40 unknown, 6 active+clean+snaptrim, 21 active+clean+snaptrim_wait, 305 active+clean; 478 KiB data, 696 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 1 op/s 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-13"}]': finished 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[51512]: from='client.49742 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm05-91051-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[51512]: osdmap e148: 8 total, 8 up, 8 in 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2458086456' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-91340-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[51512]: from='client.50230 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-91340-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:48 vm09 ceph-mon[53367]: from='client.50230 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-91340-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:48 vm09 ceph-mon[53367]: osdmap e149: 8 total, 8 up, 8 in 2026-03-10T13:40:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-91018-27"}]: dispatch 2026-03-10T13:40:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:48.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:40:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:40:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:48 vm05 ceph-mon[58955]: from='client.50230 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-91340-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:48 vm05 ceph-mon[58955]: osdmap e149: 8 total, 8 up, 8 in 2026-03-10T13:40:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-91018-27"}]: dispatch 2026-03-10T13:40:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:48 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:48.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:48 vm05 ceph-mon[51512]: from='client.50230 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm05-91340-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:48.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:48 vm05 ceph-mon[51512]: osdmap e149: 8 total, 8 up, 8 in 2026-03-10T13:40:48.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-91018-27"}]: dispatch 2026-03-10T13:40:48.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: Running main() from gmock_main.cc 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [==========] Running 42 tests from 2 test suites. 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [----------] Global test environment set-up. 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [----------] 26 tests from LibRadosAio 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.TooBig 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.TooBig (2678 ms) 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.SimpleWrite 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.SimpleWrite (3363 ms) 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.WaitForSafe 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.WaitForSafe (3922 ms) 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip (2444 ms) 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip2 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip2 (3043 ms) 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip3 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip3 (3216 ms) 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripAppend 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RoundTripAppend (4249 ms) 2026-03-10T13:40:49.417 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RemoveTest 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RemoveTest (4157 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.XattrsRoundTrip 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.XattrsRoundTrip (2800 ms) 2026-03-10T13:40:49.418 
INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RmXattr 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RmXattr (3917 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.XattrIter 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.XattrIter (3078 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.IsComplete 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.IsComplete (4198 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.IsSafe 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.IsSafe (2758 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.ReturnValue 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.ReturnValue (4208 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.Flush 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.Flush (3018 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.FlushAsync 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.FlushAsync (2881 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripWriteFull 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RoundTripWriteFull (3013 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripWriteSame 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.RoundTripWriteSame (3122 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.SimpleStat 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.SimpleStat (3033 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.OperateMtime 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.OperateMtime (3030 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.Operate2Mtime 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.Operate2Mtime (2656 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.SimpleStatNS 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.SimpleStatNS (3057 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.StatRemove 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.StatRemove (2963 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.ExecuteClass 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.ExecuteClass (3045 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.MultiWrite 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.MultiWrite (3030 ms) 
2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAio.AioUnlock 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAio.AioUnlock (3159 ms) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [----------] 26 tests from LibRadosAio (84039 ms total) 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [----------] 16 tests from LibRadosAioEC 2026-03-10T13:40:49.418 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleWrite 2026-03-10T13:40:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[58955]: pgmap v174: 332 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 284 active+clean; 476 KiB data, 696 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:40:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-91018-27"}]': finished 2026-03-10T13:40:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[58955]: osdmap e150: 8 total, 8 up, 8 in 2026-03-10T13:40:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-91018-27"}]: dispatch 2026-03-10T13:40:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/167938318' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-91051-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[58955]: from='client.49748 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-91051-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-91018-27"}]': finished 2026-03-10T13:40:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[58955]: from='client.49748 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-91051-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[58955]: osdmap e151: 8 total, 8 up, 8 in 2026-03-10T13:40:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3859339413' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm05-91340-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[51512]: pgmap v174: 332 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 284 active+clean; 476 KiB data, 696 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:40:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-91018-27"}]': finished 2026-03-10T13:40:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[51512]: osdmap e150: 8 total, 8 up, 8 in 2026-03-10T13:40:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-91018-27"}]: dispatch 2026-03-10T13:40:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/167938318' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-91051-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[51512]: from='client.49748 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-91051-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1163440283' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-91018-27"}]': finished 2026-03-10T13:40:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[51512]: from='client.49748 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-91051-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[51512]: osdmap e151: 8 total, 8 up, 8 in 2026-03-10T13:40:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3859339413' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm05-91340-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:49 vm09 ceph-mon[53367]: pgmap v174: 332 pgs: 32 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 284 active+clean; 476 KiB data, 696 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:40:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm05-91018-27"}]': finished 2026-03-10T13:40:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:49 vm09 ceph-mon[53367]: osdmap e150: 8 total, 8 up, 8 in 2026-03-10T13:40:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-91018-27"}]: dispatch 2026-03-10T13:40:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/167938318' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-91051-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:49 vm09 ceph-mon[53367]: from='client.49748 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-91051-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:49 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1163440283' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm05-91018-27"}]': finished 2026-03-10T13:40:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:49 vm09 ceph-mon[53367]: from='client.49748 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm05-91051-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:49 vm09 ceph-mon[53367]: osdmap e151: 8 total, 8 up, 8 in 2026-03-10T13:40:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3859339413' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm05-91340-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:40:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:40:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[58955]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm05-91340-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[58955]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[58955]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-91018-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[58955]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-91018-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[58955]: from='client.50239 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm05-91340-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[58955]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-91018-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[58955]: osdmap e152: 8 total, 8 up, 8 in 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-91018-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[58955]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-91018-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[51512]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm05-91340-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[51512]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[51512]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-91018-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:50.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[51512]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-91018-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:50.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:50.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[51512]: from='client.50239 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm05-91340-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:50.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[51512]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-91018-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:50.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[51512]: osdmap e152: 8 total, 8 up, 8 in 2026-03-10T13:40:50.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-91018-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:50 vm05 ceph-mon[51512]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-91018-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:50 vm09 ceph-mon[53367]: from='client.50239 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm05-91340-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:50 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:50 vm09 ceph-mon[53367]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:50 vm09 ceph-mon[53367]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-91018-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:50 vm09 ceph-mon[53367]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-91018-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:50 vm09 ceph-mon[53367]: from='client.50239 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm05-91340-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:50 vm09 ceph-mon[53367]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm05-91018-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:50 vm09 ceph-mon[53367]: osdmap e152: 8 total, 8 up, 8 in 2026-03-10T13:40:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:50 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-91018-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:50 vm09 ceph-mon[53367]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-91018-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:51.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[58955]: pgmap v177: 396 pgs: 35 creating+peering, 61 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 284 active+clean; 483 KiB data, 698 MiB used, 159 GiB / 160 GiB avail; 8.0 KiB/s wr, 1 op/s 2026-03-10T13:40:51.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:40:51.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:51.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:51.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:40:51.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[58955]: osdmap e153: 8 total, 8 up, 8 in 2026-03-10T13:40:51.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-15"}]: dispatch 2026-03-10T13:40:51.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/586254933' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm05-91051-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:51.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[58955]: from='client.50242 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm05-91051-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:51.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[51512]: pgmap v177: 396 pgs: 35 creating+peering, 61 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 284 active+clean; 483 KiB data, 698 MiB used, 159 GiB / 160 GiB avail; 8.0 KiB/s wr, 1 op/s 2026-03-10T13:40:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:40:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:40:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[51512]: osdmap e153: 8 total, 8 up, 8 in 2026-03-10T13:40:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-15"}]: dispatch 2026-03-10T13:40:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/586254933' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm05-91051-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:51 vm05 ceph-mon[51512]: from='client.50242 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm05-91051-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:51 vm09 ceph-mon[53367]: pgmap v177: 396 pgs: 35 creating+peering, 61 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 284 active+clean; 483 KiB data, 698 MiB used, 159 GiB / 160 GiB avail; 8.0 KiB/s wr, 1 op/s 2026-03-10T13:40:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:40:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:51 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:51 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:40:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:51 vm09 ceph-mon[53367]: osdmap e153: 8 total, 8 up, 8 in 2026-03-10T13:40:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-15"}]: dispatch 2026-03-10T13:40:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/586254933' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm05-91051-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:51 vm09 ceph-mon[53367]: from='client.50242 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm05-91051-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:52.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:52.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:52 vm05 ceph-mon[58955]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-91018-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-91018-28"}]': finished 2026-03-10T13:40:52.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-15"}]': finished 2026-03-10T13:40:52.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:52 vm05 ceph-mon[58955]: from='client.50242 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm05-91051-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:52.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:52.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:52 vm05 ceph-mon[58955]: osdmap e154: 8 total, 8 up, 8 in 2026-03-10T13:40:52.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3369361519' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-91340-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:52.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:52 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:52.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:52 vm05 ceph-mon[51512]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-91018-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-91018-28"}]': finished 2026-03-10T13:40:52.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:52 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-15"}]': finished 2026-03-10T13:40:52.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:52 vm05 ceph-mon[51512]: from='client.50242 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm05-91051-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:52.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:52 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:52.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:52 vm05 ceph-mon[51512]: osdmap e154: 8 total, 8 up, 8 in 2026-03-10T13:40:52.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:52 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3369361519' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-91340-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:52 vm09 ceph-mon[53367]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm05-91018-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm05-91018-28"}]': finished 2026-03-10T13:40:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-15"}]': finished 2026-03-10T13:40:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:52 vm09 ceph-mon[53367]: from='client.50242 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm05-91051-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:52 vm09 ceph-mon[53367]: osdmap e154: 8 total, 8 up, 8 in 2026-03-10T13:40:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:52 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3369361519' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-91340-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[58955]: pgmap v180: 364 pgs: 14 creating+peering, 50 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 284 active+clean; 483 KiB data, 698 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 8.0 KiB/s wr, 3 op/s 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-15", "mode": "writeback"}]: dispatch 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[58955]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[58955]: from='client.50248 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-91340-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-15", "mode": "writeback"}]': finished 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[58955]: from='client.49724 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]': finished 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[58955]: from='client.50248 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-91340-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[58955]: osdmap e155: 8 total, 8 up, 8 in 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[58955]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[51512]: pgmap v180: 364 pgs: 14 creating+peering, 50 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 284 active+clean; 483 KiB data, 698 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 8.0 KiB/s wr, 3 op/s 2026-03-10T13:40:53.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-15", "mode": "writeback"}]: dispatch 2026-03-10T13:40:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[51512]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[51512]: from='client.50248 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-91340-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:40:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:40:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:40:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-15", "mode": "writeback"}]': finished 2026-03-10T13:40:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[51512]: from='client.49724 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]': finished 2026-03-10T13:40:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[51512]: from='client.50248 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-91340-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[51512]: osdmap e155: 8 total, 8 up, 8 in 2026-03-10T13:40:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:53 vm05 ceph-mon[51512]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:53 vm09 ceph-mon[53367]: pgmap v180: 364 pgs: 14 creating+peering, 50 unknown, 6 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 284 active+clean; 483 KiB data, 698 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 8.0 KiB/s wr, 3 op/s 2026-03-10T13:40:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-15", "mode": "writeback"}]: dispatch 2026-03-10T13:40:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:53 vm09 ceph-mon[53367]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:53 vm09 ceph-mon[53367]: from='client.50248 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-91340-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:53 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:40:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:40:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:53 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:40:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-15", "mode": "writeback"}]': finished 2026-03-10T13:40:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:53 vm09 ceph-mon[53367]: from='client.49724 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]': finished 2026-03-10T13:40:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:53 vm09 ceph-mon[53367]: from='client.50248 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm05-91340-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:53 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2073370070' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:53 vm09 ceph-mon[53367]: osdmap e155: 8 total, 8 up, 8 in 2026-03-10T13:40:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:53 vm09 ceph-mon[53367]: from='client.49724 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]: dispatch 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: ackPP (2004 ms) 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.SnapGetNamePP 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapGetNamePP (2001 ms) 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 4 tests from LibRadosSnapshotsECPP (8670 ms total) 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 3 tests from LibRadosSnapshotsSelfManagedECPP 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.SnapPP 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.SnapPP (4202 ms) 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.RollbackPP 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.RollbackPP (3994 ms) 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.Bug11677 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.Bug11677 (4029 ms) 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] 3 tests from LibRadosSnapshotsSelfManagedECPP (12225 ms total) 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [----------] Global test environment tear-down 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [==========] 21 tests from 5 test suites ran. (95802 ms total) 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ PASSED ] 20 tests. 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ SKIPPED ] 1 test, listed below: 2026-03-10T13:40:54.438 INFO:tasks.workunit.client.0.vm05.stdout: api_snapshots_pp: [ SKIPPED ] LibRadosSnapshotsSelfManagedPP.WriteRollback 2026-03-10T13:40:54.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:54 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:54.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:54 vm05 ceph-mon[58955]: from='client.49724 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]': finished 2026-03-10T13:40:54.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:54 vm05 ceph-mon[58955]: osdmap e156: 8 total, 8 up, 8 in 2026-03-10T13:40:54.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:54 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:54.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:54 vm05 ceph-mon[58955]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:54.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:54 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3591861300' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm05-91051-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:54.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:54 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:54.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:54 vm05 ceph-mon[51512]: from='client.49724 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]': finished 2026-03-10T13:40:54.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:54 vm05 ceph-mon[51512]: osdmap e156: 8 total, 8 up, 8 in 2026-03-10T13:40:54.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:54 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:54.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:54 vm05 ceph-mon[51512]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:54.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:54 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3591861300' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm05-91051-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:54.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:54 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:54.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:54 vm09 ceph-mon[53367]: from='client.49724 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm05-91476-21"}]': finished 2026-03-10T13:40:54.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:54 vm09 ceph-mon[53367]: osdmap e156: 8 total, 8 up, 8 in 2026-03-10T13:40:54.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:54 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:54.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:54 vm09 ceph-mon[53367]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:54.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:54 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3591861300' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm05-91051-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: pgmap v183: 364 pgs: 19 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 331 active+clean; 463 KiB data, 702 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:40:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-91018-28"}]': finished 2026-03-10T13:40:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3591861300' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm05-91051-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.50254 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm05-91340-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: osdmap e157: 8 total, 8 up, 8 in 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm05-91340-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-15"}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: pgmap v183: 364 pgs: 19 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 331 active+clean; 463 KiB data, 702 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-91018-28"}]': finished 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3591861300' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm05-91051-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.50254 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm05-91340-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: osdmap e157: 8 total, 8 up, 8 in 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm05-91340-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-15"}]: dispatch 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: pgmap v183: 364 pgs: 19 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 331 active+clean; 463 KiB data, 702 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm05-91018-28"}]': finished 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3591861300' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm05-91051-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.50254 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2725338855' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm05-91340-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: osdmap e157: 8 total, 8 up, 8 in 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm05-91340-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.49751 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-91018-28"}]: dispatch 2026-03-10T13:40:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-15"}]: dispatch 2026-03-10T13:40:56.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:56.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:56.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:40:56.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[58955]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-91018-28"}]': finished 2026-03-10T13:40:56.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-15"}]': finished 2026-03-10T13:40:56.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[58955]: osdmap e158: 8 total, 8 up, 8 in 2026-03-10T13:40:56.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:56.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[58955]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:56.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:56.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[58955]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:56.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-91018-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:56.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[58955]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-91018-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:56.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:56.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:56.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:40:56.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[51512]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-91018-28"}]': finished 2026-03-10T13:40:56.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-15"}]': finished 2026-03-10T13:40:56.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[51512]: osdmap e158: 8 total, 8 up, 8 in 2026-03-10T13:40:56.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:56.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[51512]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:56.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:56.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[51512]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:56.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-91018-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:56.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:56 vm05 ceph-mon[51512]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-91018-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:56.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:56 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:56.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:56 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:40:56.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:56 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:40:56.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:56 vm09 ceph-mon[53367]: from='client.49751 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm05-91018-28"}]': finished 2026-03-10T13:40:56.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:56 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-15"}]': finished 2026-03-10T13:40:56.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:56 vm09 ceph-mon[53367]: osdmap e158: 8 total, 8 up, 8 in 2026-03-10T13:40:56.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:56 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:56.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:56 vm09 ceph-mon[53367]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:56.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:56 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:56.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:56 vm09 ceph-mon[53367]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:56.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:56 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-91018-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:56.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:56 vm09 ceph-mon[53367]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-91018-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[58955]: pgmap v186: 356 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 702 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[58955]: from='client.50254 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm05-91340-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm05-91340-36"}]': finished 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[58955]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-91018-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2731172461' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-91051-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[58955]: osdmap e159: 8 total, 8 up, 8 in 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[58955]: from='client.49781 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-91051-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-91018-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[58955]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-91018-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[51512]: pgmap v186: 356 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 702 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[51512]: from='client.50254 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm05-91340-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm05-91340-36"}]': finished 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[51512]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-91018-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2731172461' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-91051-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[51512]: osdmap e159: 8 total, 8 up, 8 in 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[51512]: from='client.49781 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-91051-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-91018-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:57 vm05 ceph-mon[51512]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-91018-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:57 vm09 ceph-mon[53367]: pgmap v186: 356 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 702 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:57 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:57 vm09 ceph-mon[53367]: from='client.50254 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm05-91340-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm05-91340-36"}]': finished 2026-03-10T13:40:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:57 vm09 ceph-mon[53367]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm05-91018-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:40:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:57 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2731172461' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-91051-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:57 vm09 ceph-mon[53367]: osdmap e159: 8 total, 8 up, 8 in 2026-03-10T13:40:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:57 vm09 ceph-mon[53367]: from='client.49781 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-91051-28","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:57 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-91018-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:57 vm09 ceph-mon[53367]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-91018-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:40:58.601 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:40:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:40:58.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:58 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:59.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:58 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:59.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:58 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:59 vm09 ceph-mon[53367]: pgmap v189: 332 pgs: 40 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 702 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:40:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:59 vm09 ceph-mon[53367]: from='client.49781 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-91051-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:59 vm09 ceph-mon[53367]: osdmap e160: 8 total, 8 up, 8 in 2026-03-10T13:40:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:59 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:59 vm09 ceph-mon[53367]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-91018-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-91018-29"}]': finished 2026-03-10T13:40:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:40:59 vm09 ceph-mon[53367]: osdmap e161: 8 total, 8 up, 8 in 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[58955]: pgmap v189: 332 pgs: 40 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 702 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[58955]: from='client.49781 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-91051-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[58955]: osdmap e160: 8 total, 8 up, 8 in 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[58955]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-91018-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-91018-29"}]': finished 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[58955]: osdmap e161: 8 total, 8 up, 8 in 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[51512]: pgmap v189: 332 pgs: 40 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 702 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[51512]: from='client.49781 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm05-91051-28","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[51512]: osdmap e160: 8 total, 8 up, 8 in 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:40:59.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:40:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[51512]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm05-91018-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm05-91018-29"}]': finished 2026-03-10T13:40:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:40:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:40:59 vm05 ceph-mon[51512]: osdmap e161: 8 total, 8 up, 8 in 2026-03-10T13:41:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:40:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:40:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:41:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:41:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:00 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:00 vm09 ceph-mon[53367]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:41:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:00 vm09 ceph-mon[53367]: from='client.50254 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36"}]': finished 2026-03-10T13:41:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:41:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:00 vm09 ceph-mon[53367]: osdmap e162: 8 total, 8 up, 8 in 2026-03-10T13:41:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2183005815' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-91051-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-17"}]: dispatch 2026-03-10T13:41:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:00 vm09 ceph-mon[53367]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:41:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:00 vm09 ceph-mon[53367]: from='client.49787 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-91051-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:41:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[58955]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:41:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[58955]: from='client.50254 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36"}]': finished 2026-03-10T13:41:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:41:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[58955]: osdmap e162: 8 total, 8 up, 8 in 2026-03-10T13:41:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2183005815' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-91051-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-17"}]: dispatch 2026-03-10T13:41:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[58955]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:41:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[58955]: from='client.49787 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-91051-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:41:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[51512]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:41:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-17", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[51512]: from='client.50254 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm05-91340-36"}]': finished 2026-03-10T13:41:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/445603206' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:41:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[51512]: osdmap e162: 8 total, 8 up, 8 in 2026-03-10T13:41:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2183005815' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-91051-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-17"}]: dispatch 2026-03-10T13:41:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[51512]: from='client.50254 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-91340-36"}]: dispatch 2026-03-10T13:41:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:00 vm05 ceph-mon[51512]: from='client.49787 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-91051-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout:PP (74102 ms total) 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 1 test from LibRadosTwoPoolsECPP 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosTwoPoolsECPP.CopyFrom 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosTwoPoolsECPP.CopyFrom (174 ms) 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 1 test from LibRadosTwoPoolsECPP (174 ms total) 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/0, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)0, Checksummer::xxhash32, ceph_le > 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/0.Subset 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosChecksum/0.Subset (52 ms) 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/0.Chunked 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosChecksum/0.Chunked (26 ms) 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/0 (78 ms total) 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/1, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)1, Checksummer::xxhash64, ceph_le > 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/1.Subset 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosChecksum/1.Subset (60 ms) 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/1.Chunked 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosChecksum/1.Chunked (10 ms) 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/1 (70 ms total) 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/2, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)2, Checksummer::crc32c, ceph_le > 2026-03-10T13:41:01.661 
INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/2.Subset 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosChecksum/2.Subset (65 ms) 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/2.Chunked 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosChecksum/2.Chunked (2 ms) 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/2 (67 ms total) 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscECPP 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ RUN ] LibRadosMiscECPP.CompareExtentRange 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ OK ] LibRadosMiscECPP.CompareExtentRange (1135 ms) 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscECPP (1135 ms total) 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [----------] Global test environment tear-down 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [==========] 31 tests from 7 test suites ran. (103122 ms total) 2026-03-10T13:41:01.661 INFO:tasks.workunit.client.0.vm05.stdout: api_misc_pp: [ PASSED ] 31 tests. 2026-03-10T13:41:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:01 vm05 ceph-mon[58955]: pgmap v192: 332 pgs: 11 creating+peering, 29 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:41:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-17"}]': finished 2026-03-10T13:41:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:01 vm05 ceph-mon[58955]: from='client.50254 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-91340-36"}]': finished 2026-03-10T13:41:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:01 vm05 ceph-mon[58955]: from='client.49787 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-91051-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:01 vm05 ceph-mon[58955]: osdmap e163: 8 total, 8 up, 8 in 2026-03-10T13:41:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:01 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:41:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:01 vm05 ceph-mon[51512]: pgmap v192: 332 pgs: 11 creating+peering, 29 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:41:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-17"}]': finished 2026-03-10T13:41:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:01 vm05 ceph-mon[51512]: from='client.50254 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-91340-36"}]': finished 2026-03-10T13:41:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:01 vm05 ceph-mon[51512]: from='client.49787 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-91051-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:01 vm05 ceph-mon[51512]: osdmap e163: 8 total, 8 up, 8 in 2026-03-10T13:41:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:41:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:01 vm09 ceph-mon[53367]: pgmap v192: 332 pgs: 11 creating+peering, 29 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:41:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-17"}]': finished 2026-03-10T13:41:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:01 vm09 ceph-mon[53367]: from='client.50254 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm05-91340-36"}]': finished 2026-03-10T13:41:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:01 vm09 ceph-mon[53367]: from='client.49787 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm05-91051-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:01 vm09 ceph-mon[53367]: osdmap e163: 8 total, 8 up, 8 in 2026-03-10T13:41:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:01 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: Running main() from gmock_main.cc 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [==========] Running 57 tests from 4 test suites. 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] Global test environment set-up. 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 32 tests from LibRadosAio 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.TooBigPP 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.TooBigPP (2633 ms) 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.PoolQuotaPP 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.PoolQuotaPP (17159 ms) 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleWritePP 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleWritePP (6129 ms) 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.WaitForSafePP 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.WaitForSafePP (3935 ms) 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP (2962 ms) 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP2 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP2 (4017 ms) 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP3 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP3 (3181 ms) 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripSparseReadPP 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripSparseReadPP (3767 ms) 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.IsCompletePP 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.IsCompletePP (4098 ms) 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.IsSafePP 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.IsSafePP (3125 ms) 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.ReturnValuePP 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.ReturnValuePP (3821 ms) 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.FlushPP 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.FlushPP (4076 ms) 2026-03-10T13:41:02.820 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.FlushAsyncPP 2026-03-10T13:41:02.821 
INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.FlushAsyncPP (3134 ms) 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteFullPP 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteFullPP (3056 ms) 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteFullPP2 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteFullPP2 (2649 ms) 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteSamePP 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteSamePP (3011 ms) 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteSamePP2 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteSamePP2 (3015 ms) 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleStatPPNS 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleStatPPNS (3006 ms) 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleStatPP 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleStatPP (3091 ms) 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.OperateMtime 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.OperateMtime (2996 ms) 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.OperateMtime2 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.OperateMtime2 (3167 ms) 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.StatRemovePP 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.StatRemovePP (3030 ms) 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.ExecuteClassPP 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.ExecuteClassPP (2997 ms) 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.OmapPP 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.OmapPP (3028 ms) 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.MultiWritePP 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.MultiWritePP (3012 ms) 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.AioUnlockPP 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.AioUnlockPP (3133 ms) 2026-03-10T13:41:02.821 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripAppendPP 2026-03-10T13:41:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:02 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-17", "mode": "writeback"}]: dispatch 2026-03-10T13:41:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:02 vm05 ceph-mon[58955]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:41:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:02 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:02 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:03.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:02 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-17", "mode": "writeback"}]: dispatch 2026-03-10T13:41:03.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:02 vm05 ceph-mon[51512]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:41:03.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:02 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:03.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:02 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:03.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:02 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-17", "mode": "writeback"}]: dispatch 2026-03-10T13:41:03.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:02 vm09 ceph-mon[53367]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:41:03.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:02 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:03.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:02 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:03 vm09 ceph-mon[53367]: pgmap v195: 356 pgs: 11 creating+peering, 53 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:41:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:03 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:03 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-17", "mode": "writeback"}]': finished 2026-03-10T13:41:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:03 vm09 ceph-mon[53367]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-91018-29"}]': finished 2026-03-10T13:41:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:03 vm09 ceph-mon[53367]: osdmap e164: 8 total, 8 up, 8 in 2026-03-10T13:41:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:03 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:41:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:03 vm09 ceph-mon[53367]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:41:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:03 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:03 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[58955]: pgmap v195: 356 pgs: 11 creating+peering, 53 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-17", "mode": "writeback"}]': finished 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[58955]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-91018-29"}]': finished 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[58955]: osdmap e164: 8 total, 8 up, 8 in 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[58955]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[51512]: pgmap v195: 356 pgs: 11 creating+peering, 53 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 703 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-17", "mode": "writeback"}]': finished 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[51512]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm05-91018-29"}]': finished 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[51512]: osdmap e164: 8 total, 8 up, 8 in 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4266690969' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[51512]: from='client.50263 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-91018-29"}]: dispatch 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-91018-29"}]': finished 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: osdmap e165: 8 total, 8 up, 8 in 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-17"}]: dispatch 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3358602922' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-91051-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.50272 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-91051-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-91018-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-91018-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-17"}]': finished 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.50272 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-91051-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.49796 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-91018-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:05.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: osdmap e166: 8 total, 8 up, 8 in 2026-03-10T13:41:05.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-91018-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:04 vm09 ceph-mon[53367]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-91018-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-91018-29"}]': finished 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: osdmap e165: 8 total, 8 up, 8 in 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-17"}]: dispatch 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3358602922' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-91051-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.50272 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-91051-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-91018-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-91018-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-17"}]': finished 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.50272 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-91051-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.49796 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-91018-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: osdmap e166: 8 total, 8 up, 8 in 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-91018-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[58955]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-91018-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.50263 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm05-91018-29"}]': finished 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: osdmap e165: 8 total, 8 up, 8 in 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-17"}]: dispatch 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3358602922' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-91051-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.50272 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-91051-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-91018-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-91018-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-17"}]': finished 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.50272 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm05-91051-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.49796 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm05-91018-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: osdmap e166: 8 total, 8 up, 8 in 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-91018-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:04 vm05 ceph-mon[51512]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-91018-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:05 vm09 ceph-mon[53367]: pgmap v198: 356 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 712 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T13:41:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:05 vm09 ceph-mon[53367]: from='client.? 
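
The RoundTrip2_vm05-91018-30 entries above trace a complete erasure-code round trip (profile rm, crush rule rm, profile set, pool create), each command appearing twice per monitor (once keyed by the client's v1 address, once by its assigned id) and mirrored on mon.a, mon.b and mon.c. A minimal shell sketch of the same lifecycle, assuming a running cluster with an admin keyring and using hypothetical names (testprofile-rt2, rt2pool):

    # EC profile matching the logged one: k=2, m=1, failure domain = osd
    ceph osd erasure-code-profile set testprofile-rt2 k=2 m=1 crush-failure-domain=osd
    # erasure-coded pool with 8 PGs against that profile
    ceph osd pool create rt2pool 8 8 erasure testprofile-rt2
    # teardown, mirroring the rm commands in the log; pool deletion
    # additionally requires mon_allow_pool_delete=true
    ceph osd pool delete rt2pool rt2pool --yes-i-really-really-mean-it
    ceph osd crush rule rm rt2pool       # EC pool create auto-adds a rule of the same name
    ceph osd erasure-code-profile rm testprofile-rt2
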
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:05 vm09 ceph-mon[53367]: osdmap e167: 8 total, 8 up, 8 in 2026-03-10T13:41:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:05 vm05 ceph-mon[58955]: pgmap v198: 356 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 712 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T13:41:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:05 vm05 ceph-mon[58955]: osdmap e167: 8 total, 8 up, 8 in 2026-03-10T13:41:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:05 vm05 ceph-mon[51512]: pgmap v198: 356 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 712 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s 2026-03-10T13:41:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:05 vm05 ceph-mon[51512]: osdmap e167: 8 total, 8 up, 8 in 2026-03-10T13:41:07.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:06 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:07.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:06 vm09 ceph-mon[53367]: from='client.49796 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-91018-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-91018-30"}]': finished 2026-03-10T13:41:07.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:06 vm09 ceph-mon[53367]: osdmap e168: 8 total, 8 up, 8 in 2026-03-10T13:41:07.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:06 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:07.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:06 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3418183330' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-91051-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:07.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:06 vm09 ceph-mon[53367]: from='client.50284 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-91051-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:06 vm05 ceph-mon[58955]: from='client.? 
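
The client at v1:192.168.123.105:0/1512736556 dispatches {"prefix":"status","format":"json"} in nearly every second of this log, i.e. something is polling cluster status about once per second. A hedged sketch of such a watcher loop, assuming jq is available:

    # poll cluster status once per second and print the health summary
    while true; do
        ceph status --format json | jq -r '.health.status'
        sleep 1
    done
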
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:06 vm05 ceph-mon[58955]: from='client.49796 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-91018-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-91018-30"}]': finished 2026-03-10T13:41:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:06 vm05 ceph-mon[58955]: osdmap e168: 8 total, 8 up, 8 in 2026-03-10T13:41:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:06 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:06 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3418183330' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-91051-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:06 vm05 ceph-mon[58955]: from='client.50284 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-91051-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:07.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:06 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:07.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:06 vm05 ceph-mon[51512]: from='client.49796 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm05-91018-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm05-91018-30"}]': finished 2026-03-10T13:41:07.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:06 vm05 ceph-mon[51512]: osdmap e168: 8 total, 8 up, 8 in 2026-03-10T13:41:07.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:06 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:07.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:06 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3418183330' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-91051-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:07.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:06 vm05 ceph-mon[51512]: from='client.50284 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-91051-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:07 vm09 ceph-mon[53367]: pgmap v201: 292 pgs: 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 712 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:41:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:07 vm09 ceph-mon[53367]: from='client.? 
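
Each pool the tests create is tagged for direct RADOS use immediately afterwards; the payloads set yes_i_really_mean_it unconditionally, which the CLI exposes as a flag (normally only required when the pool already carries a different application tag). A short sketch with a hypothetical pool name:

    # tag a pool for direct RADOS use, as these entries do
    ceph osd pool application enable mypool rados --yes-i-really-mean-it
    ceph osd pool application get mypool    # verify: should report "rados"
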
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:07 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:41:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:07 vm09 ceph-mon[53367]: from='client.50284 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-91051-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:07 vm09 ceph-mon[53367]: osdmap e169: 8 total, 8 up, 8 in 2026-03-10T13:41:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:07 vm05 ceph-mon[58955]: pgmap v201: 292 pgs: 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 712 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:41:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:07 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:41:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:07 vm05 ceph-mon[58955]: from='client.50284 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-91051-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:07 vm05 ceph-mon[58955]: osdmap e169: 8 total, 8 up, 8 in 2026-03-10T13:41:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:07 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:07 vm05 ceph-mon[51512]: pgmap v201: 292 pgs: 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 712 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:41:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:07 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:41:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:07 vm05 ceph-mon[51512]: from='client.50284 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm05-91051-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:07 vm05 ceph-mon[51512]: osdmap e169: 8 total, 8 up, 8 in 2026-03-10T13:41:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-19", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:08.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:41:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:41:09.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:09.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:09.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:08 vm09 ceph-mon[53367]: osdmap e170: 8 total, 8 up, 8 in 2026-03-10T13:41:09.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-19"}]: dispatch 2026-03-10T13:41:09.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:08 vm09 ceph-mon[53367]: from='client.? 
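
Between 13:41:07 and 13:41:09 the client at v1:192.168.123.105:0/2118521834 attaches pool test-rados-api-vm05-91276-19 as a cache tier over test-rados-api-vm05-91276-6 (tier add with --force-nonempty, then set-overlay and cache-mode writeback), and by 13:41:12 it has detached it again. A minimal sketch of that attach/detach cycle, with hypothetical pool names and assuming both pools already exist:

    # attach 'cachepool' as a writeback cache tier over 'basepool'
    ceph osd tier add basepool cachepool --force-nonempty   # tolerate pre-existing objects
    ceph osd tier cache-mode cachepool writeback            # absorb writes in the cache
    ceph osd tier set-overlay basepool cachepool            # redirect client I/O via the tier
    # detach again, mirroring the remove-overlay / remove pair in the log
    ceph osd tier remove-overlay basepool
    ceph osd tier remove basepool cachepool
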
v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:09.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:08 vm09 ceph-mon[53367]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:08 vm05 ceph-mon[58955]: osdmap e170: 8 total, 8 up, 8 in 2026-03-10T13:41:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-19"}]: dispatch 2026-03-10T13:41:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:08 vm05 ceph-mon[58955]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:09.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:09.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-19", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:09.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:08 vm05 ceph-mon[51512]: osdmap e170: 8 total, 8 up, 8 in 2026-03-10T13:41:09.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-19"}]: dispatch 2026-03-10T13:41:09.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:08 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:09.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:08 vm05 ceph-mon[51512]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:09 vm09 ceph-mon[53367]: pgmap v204: 364 pgs: 72 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 712 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:09 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-19"}]': finished 2026-03-10T13:41:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:09 vm09 ceph-mon[53367]: from='client.49796 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-91018-30"}]': finished 2026-03-10T13:41:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:09 vm09 ceph-mon[53367]: osdmap e171: 8 total, 8 up, 8 in 2026-03-10T13:41:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-19", "mode": "writeback"}]: dispatch 2026-03-10T13:41:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/122290090' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-91051-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:09 vm09 ceph-mon[53367]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-91051-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:09 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:09 vm09 ceph-mon[53367]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[58955]: pgmap v204: 364 pgs: 72 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 712 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-19"}]': finished 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[58955]: from='client.49796 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-91018-30"}]': finished 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[58955]: osdmap e171: 8 total, 8 up, 8 in 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-19", "mode": "writeback"}]: dispatch 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/122290090' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-91051-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[58955]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-91051-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[58955]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[51512]: pgmap v204: 364 pgs: 72 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 712 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-19"}]': finished 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[51512]: from='client.49796 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm05-91018-30"}]': finished 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[51512]: osdmap e171: 8 total, 8 up, 8 in 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-19", "mode": "writeback"}]: dispatch 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/122290090' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-91051-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[51512]: from='client.50290 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-91051-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3438687326' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:09 vm05 ceph-mon[51512]: from='client.49796 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-91018-30"}]: dispatch 2026-03-10T13:41:10.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:41:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:41:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:41:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:10 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:10 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:11.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:11.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:10 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: pgmap v207: 356 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 712 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:41:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-19", "mode": "writeback"}]': finished 2026-03-10T13:41:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.50290 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-91051-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.49796 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-91018-30"}]': finished 2026-03-10T13:41:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: osdmap e172: 8 total, 8 up, 8 in 2026-03-10T13:41:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.? 
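
CACHE_POOL_NO_HIT_SET fails here moments after the writeback cache-mode change and clears again at 13:41:12 once the tier is torn down: the warning means a cache pool has no hit_set configured, so the OSDs cannot track object temperature. A hedged sketch of how such a pool would normally be configured, with a hypothetical pool name:

    # give a cache pool hit sets so CACHE_POOL_NO_HIT_SET stops firing
    ceph osd pool set cachepool hit_set_type bloom    # bloom-filter hit tracking
    ceph osd pool set cachepool hit_set_count 8       # retain 8 intervals
    ceph osd pool set cachepool hit_set_period 60     # 60 seconds per interval
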
v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-91018-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-91018-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.49811 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-91018-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: osdmap e173: 8 total, 8 up, 8 in 2026-03-10T13:41:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-19"}]: dispatch 2026-03-10T13:41:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-91018-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:11 vm09 ceph-mon[53367]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-91018-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: pgmap v207: 356 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 712 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-19", "mode": "writeback"}]': finished 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.50290 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-91051-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.49796 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-91018-30"}]': finished 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: osdmap e172: 8 total, 8 up, 8 in 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.? 
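
Note that the RoundTripAppend_vm05-91018-31 pool create is dispatched at 13:41:11 but only marked finished at 13:41:13, after the osdmap ticks forward and the new PGs pass through unknown and creating+peering in the pgmap lines: pool creation is asynchronous, and callers that need the pool usable typically wait for its PGs to settle. A hedged sketch of such a wait, assuming jq:

    # block until no PG reports a state outside the active family
    # (this log itself shows benign states like active+clean+snaptrim)
    while ceph status --format json \
          | jq -e '.pgmap.pgs_by_state[] | select(.state_name | startswith("active") | not)' \
          >/dev/null; do
        sleep 1
    done
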
v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-91018-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-91018-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.49811 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-91018-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: osdmap e173: 8 total, 8 up, 8 in 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-19"}]: dispatch 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-91018-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[58955]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-91018-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: pgmap v207: 356 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 712 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-19", "mode": "writeback"}]': finished 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.50290 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm05-91051-32","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.49796 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm05-91018-30"}]': finished 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: osdmap e172: 8 total, 8 up, 8 in 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-91018-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-91018-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.49811 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm05-91018-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: osdmap e173: 8 total, 8 up, 8 in 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-19"}]: dispatch 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-91018-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:11 vm05 ceph-mon[51512]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-91018-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:13.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:12 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:13.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:12 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:13.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:12 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-19"}]': finished 2026-03-10T13:41:13.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:12 vm05 ceph-mon[51512]: osdmap e174: 8 total, 8 up, 8 in 2026-03-10T13:41:13.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:12 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:13.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:12 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:13.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:12 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-19"}]': finished 2026-03-10T13:41:13.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:12 vm05 ceph-mon[58955]: osdmap e174: 8 total, 8 up, 8 in 2026-03-10T13:41:13.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:12 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:13.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:12 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:13.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:12 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-19"}]': finished 2026-03-10T13:41:13.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:12 vm09 ceph-mon[53367]: osdmap e174: 8 total, 8 up, 8 in 2026-03-10T13:41:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:13 vm05 ceph-mon[58955]: pgmap v210: 324 pgs: 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 712 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:41:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:13 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3237531342' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm05-91051-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:13 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:13 vm05 ceph-mon[58955]: from='client.49811 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-91018-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-91018-31"}]': finished 2026-03-10T13:41:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:13 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3237531342' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm05-91051-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:13 vm05 ceph-mon[58955]: osdmap e175: 8 total, 8 up, 8 in 2026-03-10T13:41:14.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:13 vm05 ceph-mon[51512]: pgmap v210: 324 pgs: 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 712 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:41:14.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:13 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3237531342' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm05-91051-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:14.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:13 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:14.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:13 vm05 ceph-mon[51512]: from='client.49811 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-91018-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-91018-31"}]': finished 2026-03-10T13:41:14.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:13 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3237531342' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm05-91051-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:14.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:13 vm05 ceph-mon[51512]: osdmap e175: 8 total, 8 up, 8 in 2026-03-10T13:41:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:13 vm09 ceph-mon[53367]: pgmap v210: 324 pgs: 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 712 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:41:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:13 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3237531342' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm05-91051-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:13 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:13 vm09 ceph-mon[53367]: from='client.49811 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm05-91018-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm05-91018-31"}]': finished 2026-03-10T13:41:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:13 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3237531342' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm05-91051-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:13 vm09 ceph-mon[53367]: osdmap e175: 8 total, 8 up, 8 in 2026-03-10T13:41:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3237531342' entity='client.admin' cmd=[{ 2026-03-10T13:41:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[58955]: "prefix": "osd pool set", 2026-03-10T13:41:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[58955]: "pool": "PoolEIOFlag_vm05-91051-33", 2026-03-10T13:41:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[58955]: "var": "eio", 2026-03-10T13:41:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[58955]: "val": "true" 2026-03-10T13:41:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[58955]: }]: dispatch 2026-03-10T13:41:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3237531342' entity='client.admin' cmd='[{ 2026-03-10T13:41:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[58955]: "prefix": "osd pool set", 2026-03-10T13:41:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[58955]: "pool": "PoolEIOFlag_vm05-91051-33", 2026-03-10T13:41:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[58955]: "var": "eio", 2026-03-10T13:41:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[58955]: "val": "true" 2026-03-10T13:41:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[58955]: }]': finished 2026-03-10T13:41:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[58955]: osdmap e176: 8 total, 8 up, 8 in 2026-03-10T13:41:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:15.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3237531342' entity='client.admin' cmd=[{ 2026-03-10T13:41:15.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[51512]: "prefix": "osd pool set", 2026-03-10T13:41:15.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[51512]: "pool": "PoolEIOFlag_vm05-91051-33", 2026-03-10T13:41:15.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[51512]: "var": "eio", 2026-03-10T13:41:15.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[51512]: "val": "true" 2026-03-10T13:41:15.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[51512]: }]: dispatch 2026-03-10T13:41:15.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:15.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3237531342' entity='client.admin' cmd='[{ 2026-03-10T13:41:15.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[51512]: "prefix": "osd pool set", 2026-03-10T13:41:15.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[51512]: "pool": "PoolEIOFlag_vm05-91051-33", 2026-03-10T13:41:15.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[51512]: "var": "eio", 2026-03-10T13:41:15.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[51512]: "val": "true" 2026-03-10T13:41:15.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[51512]: }]': finished 2026-03-10T13:41:15.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[51512]: osdmap e176: 8 total, 8 up, 8 in 2026-03-10T13:41:15.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:14 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3237531342' entity='client.admin' cmd=[{ 2026-03-10T13:41:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:14 vm09 ceph-mon[53367]: "prefix": "osd pool set", 2026-03-10T13:41:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:14 vm09 ceph-mon[53367]: "pool": "PoolEIOFlag_vm05-91051-33", 2026-03-10T13:41:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:14 vm09 ceph-mon[53367]: "var": "eio", 2026-03-10T13:41:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:14 vm09 ceph-mon[53367]: "val": "true" 2026-03-10T13:41:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:14 vm09 ceph-mon[53367]: }]: dispatch 2026-03-10T13:41:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3237531342' entity='client.admin' cmd='[{ 2026-03-10T13:41:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:14 vm09 ceph-mon[53367]: "prefix": "osd pool set", 2026-03-10T13:41:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:14 vm09 ceph-mon[53367]: "pool": "PoolEIOFlag_vm05-91051-33", 2026-03-10T13:41:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:14 vm09 ceph-mon[53367]: "var": "eio", 2026-03-10T13:41:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:14 vm09 ceph-mon[53367]: "val": "true" 2026-03-10T13:41:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:14 vm09 ceph-mon[53367]: }]': finished 2026-03-10T13:41:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:14 vm09 ceph-mon[53367]: osdmap e176: 8 total, 8 up, 8 in 2026-03-10T13:41:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:15 vm05 ceph-mon[58955]: pgmap v213: 332 pgs: 8 unknown, 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 713 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:41:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:15 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:15 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:15 vm05 ceph-mon[58955]: osdmap e177: 8 total, 8 up, 8 in 2026-03-10T13:41:16.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:15 vm05 ceph-mon[51512]: pgmap v213: 332 pgs: 8 unknown, 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 713 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:41:16.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:15 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:16.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:16.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:16.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:15 vm05 ceph-mon[51512]: osdmap e177: 8 total, 8 up, 8 in 2026-03-10T13:41:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:15 vm09 ceph-mon[53367]: pgmap v213: 332 pgs: 8 unknown, 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 713 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:41:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:15 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:15 vm09 ceph-mon[53367]: osdmap e177: 8 total, 8 up, 8 in 2026-03-10T13:41:17.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:17.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[58955]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:17.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:17.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[58955]: pgmap v216: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 713 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:41:17.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[58955]: from='client.49811 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-91018-31"}]': finished 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[58955]: osdmap e178: 8 total, 8 up, 8 in 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-21"}]: dispatch 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[58955]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/624019654' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-91051-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[58955]: from='client.49820 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-91051-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[51512]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[51512]: pgmap v216: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 713 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[51512]: from='client.49811 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-91018-31"}]': finished 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[51512]: osdmap e178: 8 total, 8 up, 8 in 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-21"}]: dispatch 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[51512]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/624019654' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-91051-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:16 vm05 ceph-mon[51512]: from='client.49820 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-91051-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:16 vm09 ceph-mon[53367]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:16 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-21", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:16 vm09 ceph-mon[53367]: pgmap v216: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 713 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:41:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:16 vm09 ceph-mon[53367]: from='client.49811 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm05-91018-31"}]': finished 2026-03-10T13:41:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-21", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:16 vm09 ceph-mon[53367]: osdmap e178: 8 total, 8 up, 8 in 2026-03-10T13:41:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-21"}]: dispatch 2026-03-10T13:41:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2680506101' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:16 vm09 ceph-mon[53367]: from='client.49811 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-91018-31"}]: dispatch 2026-03-10T13:41:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/624019654' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-91051-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:16 vm09 ceph-mon[53367]: from='client.49820 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-91051-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:18.315 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:18.315 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:17 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-21"}]': finished 2026-03-10T13:41:18.316 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:17 vm09 ceph-mon[53367]: from='client.49811 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-91018-31"}]': finished 2026-03-10T13:41:18.316 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:17 vm09 ceph-mon[53367]: from='client.49820 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-91051-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:18.316 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:17 vm09 ceph-mon[53367]: osdmap e179: 8 total, 8 up, 8 in 2026-03-10T13:41:18.316 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-21", "mode": "writeback"}]: dispatch 2026-03-10T13:41:18.316 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:18.316 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:17 vm09 ceph-mon[53367]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:18.316 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:18.316 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:17 vm09 ceph-mon[53367]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:18.316 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-91018-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:18.316 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:17 vm09 ceph-mon[53367]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-91018-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-21"}]': finished 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[58955]: from='client.49811 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-91018-31"}]': finished 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[58955]: from='client.49820 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-91051-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[58955]: osdmap e179: 8 total, 8 up, 8 in 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-21", "mode": "writeback"}]: dispatch 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[58955]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[58955]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-91018-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[58955]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-91018-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-21"}]': finished 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[51512]: from='client.49811 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm05-91018-31"}]': finished 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[51512]: from='client.49820 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm05-91051-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[51512]: osdmap e179: 8 total, 8 up, 8 in 2026-03-10T13:41:18.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-21", "mode": "writeback"}]: dispatch 2026-03-10T13:41:18.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:18.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[51512]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:18.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:18.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[51512]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:18.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-91018-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:18.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:17 vm05 ceph-mon[51512]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-91018-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:18.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:41:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:41:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[58955]: pgmap v219: 356 pgs: 64 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 713 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-21", "mode": "writeback"}]': finished 2026-03-10T13:41:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[58955]: from='client.49823 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-91018-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[58955]: osdmap e180: 8 total, 8 up, 8 in 2026-03-10T13:41:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm05-91018-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[58955]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm05-91018-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:19.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[51512]: pgmap v219: 356 pgs: 64 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 713 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:19.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:19.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:19.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:19.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-21", "mode": "writeback"}]': finished 2026-03-10T13:41:19.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[51512]: from='client.49823 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-91018-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:19.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[51512]: osdmap e180: 8 total, 8 up, 8 in 2026-03-10T13:41:19.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm05-91018-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:19.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:18 vm05 ceph-mon[51512]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm05-91018-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:18 vm09 ceph-mon[53367]: pgmap v219: 356 pgs: 64 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 713 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:18 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:18 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:18 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:18 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-21", "mode": "writeback"}]': finished 2026-03-10T13:41:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:18 vm09 ceph-mon[53367]: from='client.49823 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm05-91018-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:18 vm09 ceph-mon[53367]: osdmap e180: 8 total, 8 up, 8 in 2026-03-10T13:41:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:18 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm05-91018-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:18 vm09 ceph-mon[53367]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm05-91018-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:19 vm05 ceph-mon[58955]: osdmap e181: 8 total, 8 up, 8 in 2026-03-10T13:41:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-21"}]: dispatch 2026-03-10T13:41:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:19 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/4096829800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-91051-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:19 vm05 ceph-mon[58955]: from='client.50317 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-91051-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:19 vm05 ceph-mon[51512]: osdmap e181: 8 total, 8 up, 8 in 2026-03-10T13:41:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-21"}]: dispatch 2026-03-10T13:41:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4096829800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-91051-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:19 vm05 ceph-mon[51512]: from='client.50317 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-91051-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:20.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:41:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:41:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:41:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:19 vm09 ceph-mon[53367]: osdmap e181: 8 total, 8 up, 8 in 2026-03-10T13:41:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:19 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-21"}]: dispatch 2026-03-10T13:41:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4096829800' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-91051-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:19 vm09 ceph-mon[53367]: from='client.50317 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-91051-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:20 vm05 ceph-mon[58955]: pgmap v222: 356 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 713 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:41:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:20 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:20 vm05 ceph-mon[58955]: from='client.49823 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm05-91018-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-91018-32"}]': finished 2026-03-10T13:41:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-21"}]': finished 2026-03-10T13:41:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:20 vm05 ceph-mon[58955]: from='client.50317 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-91051-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:20 vm05 ceph-mon[58955]: osdmap e182: 8 total, 8 up, 8 in 2026-03-10T13:41:21.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:20 vm05 ceph-mon[51512]: pgmap v222: 356 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 713 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:41:21.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:20 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:21.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:20 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:21.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:20 vm05 ceph-mon[51512]: from='client.49823 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm05-91018-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-91018-32"}]': finished 2026-03-10T13:41:21.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-21"}]': finished 2026-03-10T13:41:21.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:20 vm05 ceph-mon[51512]: from='client.50317 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-91051-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:21.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:20 vm05 ceph-mon[51512]: osdmap e182: 8 total, 8 up, 8 in 2026-03-10T13:41:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:20 vm09 ceph-mon[53367]: pgmap v222: 356 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 713 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:41:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:20 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:20 vm09 ceph-mon[53367]: from='client.49823 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm05-91018-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm05-91018-32"}]': finished 2026-03-10T13:41:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-21"}]': finished 2026-03-10T13:41:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:20 vm09 ceph-mon[53367]: from='client.50317 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm05-91051-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:20 vm09 ceph-mon[53367]: osdmap e182: 8 total, 8 up, 8 in 2026-03-10T13:41:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:22 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:22 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:22 vm05 ceph-mon[58955]: osdmap e183: 8 total, 8 up, 8 in 2026-03-10T13:41:22.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:22 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:22.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:22.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:22 vm05 ceph-mon[51512]: osdmap e183: 8 total, 8 up, 8 in 2026-03-10T13:41:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:22 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:22 vm09 ceph-mon[53367]: osdmap e183: 8 total, 8 up, 8 in 2026-03-10T13:41:23.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[58955]: pgmap v225: 300 pgs: 8 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 713 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:41:23.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:23.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:41:23.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[58955]: osdmap e184: 8 total, 8 up, 8 in 2026-03-10T13:41:23.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:23.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:23.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[58955]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:23.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3183942452' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:23.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[58955]: from='client.49832 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[51512]: pgmap v225: 300 pgs: 8 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 713 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:41:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:41:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[51512]: osdmap e184: 8 total, 8 up, 8 in 2026-03-10T13:41:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[51512]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3183942452' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:23 vm05 ceph-mon[51512]: from='client.49832 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:23 vm09 ceph-mon[53367]: pgmap v225: 300 pgs: 8 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 713 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:41:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:23 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:41:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:23 vm09 ceph-mon[53367]: osdmap e184: 8 total, 8 up, 8 in 2026-03-10T13:41:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:23 vm09 ceph-mon[53367]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3183942452' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:23 vm09 ceph-mon[53367]: from='client.49832 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-36","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:24.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:24.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:24.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[58955]: pgmap v227: 356 pgs: 64 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 250 B/s wr, 1 op/s 2026-03-10T13:41:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[58955]: from='client.49823 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-91018-32"}]': finished 2026-03-10T13:41:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[58955]: from='client.49832 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[58955]: osdmap e185: 8 total, 8 up, 8 in 2026-03-10T13:41:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[58955]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:25.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[51512]: pgmap v227: 356 pgs: 64 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 250 B/s wr, 1 op/s 2026-03-10T13:41:25.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:25.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[51512]: from='client.49823 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-91018-32"}]': finished 2026-03-10T13:41:25.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[51512]: from='client.49832 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:25.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:25.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[51512]: osdmap e185: 8 total, 8 up, 8 in 2026-03-10T13:41:25.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[51512]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:25.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:25 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:25 vm09 ceph-mon[53367]: pgmap v227: 356 pgs: 64 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 250 B/s wr, 1 op/s 2026-03-10T13:41:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:25 vm09 ceph-mon[53367]: from='client.49823 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm05-91018-32"}]': finished 2026-03-10T13:41:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:25 vm09 ceph-mon[53367]: from='client.49832 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/355546863' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:25 vm09 ceph-mon[53367]: osdmap e185: 8 total, 8 up, 8 in 2026-03-10T13:41:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:25 vm09 ceph-mon[53367]: from='client.49823 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-91018-32"}]: dispatch 2026-03-10T13:41:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:26.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: from='client.49823 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-91018-32"}]': finished 2026-03-10T13:41:26.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: osdmap e186: 8 total, 8 up, 8 in 2026-03-10T13:41:26.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3040392316' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:26.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: from='client.50329 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:26.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-91018-33"}]: dispatch 2026-03-10T13:41:26.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-91018-33"}]: dispatch 2026-03-10T13:41:26.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm05-91018-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:26.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: from='client.50329 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm05-91018-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: osdmap e187: 8 total, 8 up, 8 in 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm05-91018-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm05-91018-33"}]: dispatch 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-23"}]: dispatch 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: from='client.49823 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-91018-32"}]': finished 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: osdmap e186: 8 total, 8 up, 8 in 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3040392316' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: from='client.50329 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-91018-33"}]: dispatch 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-91018-33"}]: dispatch 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm05-91018-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: from='client.50329 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm05-91018-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: osdmap e187: 8 total, 8 up, 8 in 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm05-91018-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm05-91018-33"}]: dispatch 2026-03-10T13:41:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-23"}]: dispatch 2026-03-10T13:41:26.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: from='client.49823 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm05-91018-32"}]': finished 2026-03-10T13:41:26.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: osdmap e186: 8 total, 8 up, 8 in 2026-03-10T13:41:26.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3040392316' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: from='client.50329 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-91018-33"}]: dispatch 2026-03-10T13:41:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-91018-33"}]: dispatch 2026-03-10T13:41:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm05-91018-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: from='client.50329 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2339877575' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm05-91018-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: osdmap e187: 8 total, 8 up, 8 in 2026-03-10T13:41:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm05-91018-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm05-91018-33"}]: dispatch 2026-03-10T13:41:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-23"}]: dispatch 2026-03-10T13:41:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:27 vm09 ceph-mon[53367]: pgmap v230: 388 pgs: 32 unknown, 64 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 250 B/s wr, 1 op/s 2026-03-10T13:41:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:27.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:27 vm05 ceph-mon[58955]: pgmap v230: 388 pgs: 32 unknown, 64 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 250 B/s wr, 1 op/s 2026-03-10T13:41:27.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:27.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:27 vm05 ceph-mon[51512]: pgmap v230: 388 pgs: 32 unknown, 64 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 714 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 250 B/s wr, 1 op/s 2026-03-10T13:41:27.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:28.584 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:41:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:41:28.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:28 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-23"}]': finished 2026-03-10T13:41:28.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:28 vm09 ceph-mon[53367]: osdmap e188: 8 total, 8 up, 8 in 2026-03-10T13:41:28.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-23", "mode": "writeback"}]: dispatch 2026-03-10T13:41:28.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/207927640' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:28.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:28.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:28 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:29.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-23"}]': finished 2026-03-10T13:41:29.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:28 vm05 ceph-mon[51512]: osdmap e188: 8 total, 8 up, 8 in 2026-03-10T13:41:29.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-23", "mode": "writeback"}]: dispatch 2026-03-10T13:41:29.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/207927640' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:29.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:29.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:28 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:29.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-23"}]': finished 2026-03-10T13:41:29.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:28 vm05 ceph-mon[58955]: osdmap e188: 8 total, 8 up, 8 in 2026-03-10T13:41:29.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:28 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-23", "mode": "writeback"}]: dispatch 2026-03-10T13:41:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/207927640' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:28 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[51512]: pgmap v233: 420 pgs: 64 unknown, 64 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 714 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm05-91018-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm05-91018-33"}]': finished 2026-03-10T13:41:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-23", "mode": "writeback"}]': finished 2026-03-10T13:41:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/207927640' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[51512]: osdmap e189: 8 total, 8 up, 8 in 2026-03-10T13:41:30.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:30.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:41:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:41:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:41:30.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[58955]: pgmap v233: 420 pgs: 64 unknown, 64 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 714 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:30.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:30.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm05-91018-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm05-91018-33"}]': finished 2026-03-10T13:41:30.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-23", "mode": "writeback"}]': finished 2026-03-10T13:41:30.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/207927640' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:30.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:30.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[58955]: osdmap e189: 8 total, 8 up, 8 in 2026-03-10T13:41:30.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:29 vm09 ceph-mon[53367]: pgmap v233: 420 pgs: 64 unknown, 64 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 459 KiB data, 714 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm05-91018-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm05-91018-33"}]': finished 2026-03-10T13:41:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:29 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-23", "mode": "writeback"}]': finished 2026-03-10T13:41:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/207927640' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm05-91051-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:29 vm09 ceph-mon[53367]: osdmap e189: 8 total, 8 up, 8 in 2026-03-10T13:41:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:30 vm05 ceph-mon[58955]: osdmap e190: 8 total, 8 up, 8 in 2026-03-10T13:41:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:30 vm05 ceph-mon[51512]: osdmap e190: 8 total, 8 up, 8 in 2026-03-10T13:41:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:31.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:31.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:30 vm09 ceph-mon[53367]: osdmap e190: 8 total, 8 up, 8 in 2026-03-10T13:41:31.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:31.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[58955]: pgmap v236: 396 pgs: 3 unknown, 5 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 374 active+clean; 459 KiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 2.0 KiB/s wr, 4 op/s 2026-03-10T13:41:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[58955]: osdmap e191: 8 total, 8 up, 8 in 2026-03-10T13:41:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-23"}]: dispatch 2026-03-10T13:41:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-91018-33"}]: dispatch 2026-03-10T13:41:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-23"}]': finished 2026-03-10T13:41:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-91018-33"}]': finished 2026-03-10T13:41:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[58955]: osdmap e192: 8 total, 8 up, 8 in 2026-03-10T13:41:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-91018-33"}]: dispatch 2026-03-10T13:41:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[51512]: pgmap v236: 396 pgs: 3 unknown, 5 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 374 active+clean; 459 KiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 2.0 KiB/s wr, 4 op/s 2026-03-10T13:41:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[51512]: osdmap e191: 8 total, 8 up, 8 in 2026-03-10T13:41:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-23"}]: dispatch 2026-03-10T13:41:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-91018-33"}]: dispatch 2026-03-10T13:41:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-23"}]': finished 2026-03-10T13:41:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-91018-33"}]': finished 2026-03-10T13:41:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[51512]: osdmap e192: 8 total, 8 up, 8 in 2026-03-10T13:41:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-91018-33"}]: dispatch 2026-03-10T13:41:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:31 vm09 ceph-mon[53367]: pgmap v236: 396 pgs: 3 unknown, 5 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 374 active+clean; 459 KiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 2.0 KiB/s wr, 4 op/s 2026-03-10T13:41:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:31 vm09 ceph-mon[53367]: osdmap e191: 8 total, 8 up, 8 in 2026-03-10T13:41:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-23"}]: dispatch 2026-03-10T13:41:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-91018-33"}]: dispatch 2026-03-10T13:41:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:31 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:31 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:31 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-23"}]': finished 2026-03-10T13:41:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm05-91018-33"}]': finished 2026-03-10T13:41:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:31 vm09 ceph-mon[53367]: osdmap e192: 8 total, 8 up, 8 in 2026-03-10T13:41:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-91018-33"}]: dispatch 2026-03-10T13:41:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:33 vm09 ceph-mon[53367]: pgmap v239: 324 pgs: 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-10T13:41:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-91018-33"}]': finished 2026-03-10T13:41:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:33 vm09 ceph-mon[53367]: osdmap e193: 8 total, 8 up, 8 in 2026-03-10T13:41:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3874256965' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:33 vm09 ceph-mon[53367]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:33 vm09 ceph-mon[53367]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:33 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-91018-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:33 vm09 ceph-mon[53367]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-91018-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:33.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[58955]: pgmap v239: 324 pgs: 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-10T13:41:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-91018-33"}]': finished 2026-03-10T13:41:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[58955]: osdmap e193: 8 total, 8 up, 8 in 2026-03-10T13:41:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3874256965' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[58955]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[58955]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-91018-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[58955]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-91018-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[51512]: pgmap v239: 324 pgs: 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 718 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.7 KiB/s wr, 4 op/s 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2339877575' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm05-91018-33"}]': finished 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[51512]: osdmap e193: 8 total, 8 up, 8 in 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3874256965' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[51512]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[51512]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-91018-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[51512]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-91018-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:34 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3874256965' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:34 vm09 ceph-mon[53367]: from='client.50344 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-91018-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:34 vm09 ceph-mon[53367]: osdmap e194: 8 total, 8 up, 8 in 2026-03-10T13:41:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-91018-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:34 vm09 ceph-mon[53367]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-91018-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:34 vm09 ceph-mon[53367]: osdmap e195: 8 total, 8 up, 8 in 2026-03-10T13:41:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3874256965' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[58955]: from='client.50344 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-91018-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[58955]: osdmap e194: 8 total, 8 up, 8 in 2026-03-10T13:41:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-91018-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[58955]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-91018-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[58955]: osdmap e195: 8 total, 8 up, 8 in 2026-03-10T13:41:34.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3874256965' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:34.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[51512]: from='client.50344 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm05-91018-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:34.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[51512]: osdmap e194: 8 total, 8 up, 8 in 2026-03-10T13:41:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-91018-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[51512]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-91018-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:34 vm05 ceph-mon[51512]: osdmap e195: 8 total, 8 up, 8 in 2026-03-10T13:41:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:35 vm05 ceph-mon[58955]: pgmap v242: 356 pgs: 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-10T13:41:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4011037029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:35 vm05 ceph-mon[58955]: from='client.50350 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:35 vm05 ceph-mon[58955]: from='client.50344 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-91018-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-91018-34"}]': finished 2026-03-10T13:41:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:35 vm05 ceph-mon[58955]: from='client.50350 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:35 vm05 ceph-mon[58955]: osdmap e196: 8 total, 8 up, 8 in 2026-03-10T13:41:35.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:35 vm05 ceph-mon[51512]: pgmap v242: 356 pgs: 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-10T13:41:35.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4011037029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:35.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:35 vm05 ceph-mon[51512]: from='client.50350 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:35.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:35 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:35.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:35 vm05 ceph-mon[51512]: from='client.50344 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-91018-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-91018-34"}]': finished 2026-03-10T13:41:35.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:35 vm05 ceph-mon[51512]: from='client.50350 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:35.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:35 vm05 ceph-mon[51512]: osdmap e196: 8 total, 8 up, 8 in 2026-03-10T13:41:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:35 vm09 ceph-mon[53367]: pgmap v242: 356 pgs: 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-10T13:41:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4011037029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:35 vm09 ceph-mon[53367]: from='client.50350 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:35 vm09 ceph-mon[53367]: from='client.50344 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm05-91018-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm05-91018-34"}]': finished 2026-03-10T13:41:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:35 vm09 ceph-mon[53367]: from='client.50350 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:35 vm09 ceph-mon[53367]: osdmap e196: 8 total, 8 up, 8 in 2026-03-10T13:41:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:36 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:36 vm05 ceph-mon[58955]: osdmap e197: 8 total, 8 up, 8 in 2026-03-10T13:41:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-25"}]: dispatch 2026-03-10T13:41:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3787651265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:36.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:36 vm05 ceph-mon[58955]: from='client.50356 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:36 vm05 ceph-mon[51512]: osdmap e197: 8 total, 8 up, 8 in 2026-03-10T13:41:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-25"}]: dispatch 2026-03-10T13:41:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3787651265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:36 vm05 ceph-mon[51512]: from='client.50356 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:36 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:36 vm09 ceph-mon[53367]: osdmap e197: 8 total, 8 up, 8 in 2026-03-10T13:41:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-25"}]: dispatch 2026-03-10T13:41:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3787651265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:36 vm09 ceph-mon[53367]: from='client.50356 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:37.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[58955]: pgmap v245: 396 pgs: 40 unknown, 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:41:37.610 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:37.610 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-25"}]': finished 2026-03-10T13:41:37.610 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[58955]: from='client.50356 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:37.610 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[58955]: osdmap e198: 8 total, 8 up, 8 in 2026-03-10T13:41:37.610 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-25", "mode": "writeback"}]: dispatch 2026-03-10T13:41:37.610 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:37.610 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[58955]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:37.610 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[51512]: pgmap v245: 396 pgs: 40 unknown, 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:41:37.610 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:37.610 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-25"}]': finished 2026-03-10T13:41:37.610 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[51512]: from='client.50356 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:37.610 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[51512]: osdmap e198: 8 total, 8 up, 8 in 2026-03-10T13:41:37.610 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-25", "mode": "writeback"}]: dispatch 2026-03-10T13:41:37.610 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:37.610 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:37 vm05 ceph-mon[51512]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:37 vm09 ceph-mon[53367]: pgmap v245: 396 pgs: 40 unknown, 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:41:37.705 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:37.705 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:37 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-25"}]': finished 2026-03-10T13:41:37.705 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:37 vm09 ceph-mon[53367]: from='client.50356 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:37.705 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:37 vm09 ceph-mon[53367]: osdmap e198: 8 total, 8 up, 8 in 2026-03-10T13:41:37.705 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-25", "mode": "writeback"}]: dispatch 2026-03-10T13:41:37.705 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:37.705 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:37 vm09 ceph-mon[53367]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:41:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-25", "mode": "writeback"}]': finished 2026-03-10T13:41:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[51512]: from='client.50344 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-91018-34"}]': finished 2026-03-10T13:41:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[51512]: osdmap e199: 8 total, 8 up, 8 in 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[51512]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3853819016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[51512]: from='client.50362 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-25", "mode": "writeback"}]': finished 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[58955]: from='client.50344 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-91018-34"}]': finished 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[58955]: osdmap e199: 8 total, 8 up, 8 in 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[58955]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3853819016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[58955]: from='client.50362 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:38.583 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:41:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:41:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:38 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-25", "mode": "writeback"}]': finished 2026-03-10T13:41:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:38 vm09 ceph-mon[53367]: from='client.50344 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm05-91018-34"}]': finished 2026-03-10T13:41:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4122465078' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:38 vm09 ceph-mon[53367]: osdmap e199: 8 total, 8 up, 8 in 2026-03-10T13:41:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:38 vm09 ceph-mon[53367]: from='client.50344 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-91018-34"}]: dispatch 2026-03-10T13:41:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3853819016' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:38 vm09 ceph-mon[53367]: from='client.50362 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:38 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:41:38.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:41:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:41:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: pgmap v248: 420 pgs: 64 unknown, 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 723 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:41:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:41:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:41:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: from='client.50344 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-91018-34"}]': finished 2026-03-10T13:41:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: from='client.50362 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: osdmap e200: 8 total, 8 up, 8 in 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-25"}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-91018-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[58955]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-91018-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: pgmap v248: 420 pgs: 64 unknown, 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 723 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: from='client.50344 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-91018-34"}]': finished 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: from='client.50362 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: osdmap e200: 8 total, 8 up, 8 in 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-25"}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-91018-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:39 vm05 ceph-mon[51512]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-91018-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: pgmap v248: 420 pgs: 64 unknown, 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 459 KiB data, 723 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:41:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:41:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:41:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: from='client.50344 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm05-91018-34"}]': finished 2026-03-10T13:41:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: from='client.50362 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: osdmap e200: 8 total, 8 up, 8 in 2026-03-10T13:41:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-25"}]: dispatch 2026-03-10T13:41:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-91018-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:39 vm09 ceph-mon[53367]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-91018-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:41:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:41:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:41:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:40 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-25"}]': finished 2026-03-10T13:41:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:40 vm09 ceph-mon[53367]: from='client.50368 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-91018-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:40 vm09 ceph-mon[53367]: osdmap e201: 8 total, 8 up, 8 in 2026-03-10T13:41:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm05-91018-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:40 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2005004971' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:40 vm09 ceph-mon[53367]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm05-91018-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:40 vm09 ceph-mon[53367]: from='client.50374 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-25"}]': finished 2026-03-10T13:41:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[58955]: from='client.50368 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-91018-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[58955]: osdmap e201: 8 total, 8 up, 8 in 2026-03-10T13:41:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm05-91018-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2005004971' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[58955]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm05-91018-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[58955]: from='client.50374 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-25"}]': finished 2026-03-10T13:41:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[51512]: from='client.50368 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm05-91018-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:40.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[51512]: osdmap e201: 8 total, 8 up, 8 in 2026-03-10T13:41:40.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm05-91018-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:40.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2005004971' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:40.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[51512]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm05-91018-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:40.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:40 vm05 ceph-mon[51512]: from='client.50374 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:41.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:41 vm09 ceph-mon[53367]: pgmap v251: 452 pgs: 19 creating+peering, 13 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 406 active+clean; 459 KiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:41:41.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:41 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:41.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:41 vm09 ceph-mon[53367]: from='client.50374 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:41.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:41 vm09 ceph-mon[53367]: osdmap e202: 8 total, 8 up, 8 in 2026-03-10T13:41:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:41 vm05 ceph-mon[58955]: pgmap v251: 452 pgs: 19 creating+peering, 13 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 406 active+clean; 459 KiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:41:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:41 vm05 ceph-mon[58955]: from='client.50374 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:41 vm05 ceph-mon[58955]: osdmap e202: 8 total, 8 up, 8 in 2026-03-10T13:41:41.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:41 vm05 ceph-mon[51512]: pgmap v251: 452 pgs: 19 creating+peering, 13 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 406 active+clean; 459 KiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:41:41.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:41.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:41 vm05 ceph-mon[51512]: from='client.50374 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm05-91051-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:41.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:41 vm05 ceph-mon[51512]: osdmap e202: 8 total, 8 up, 8 in 2026-03-10T13:41:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:42 vm09 ceph-mon[53367]: from='client.50368 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm05-91018-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-91018-35"}]': finished 2026-03-10T13:41:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:42 vm09 ceph-mon[53367]: osdmap e203: 8 total, 8 up, 8 in 2026-03-10T13:41:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:42 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:42 vm05 ceph-mon[58955]: from='client.50368 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm05-91018-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-91018-35"}]': finished 2026-03-10T13:41:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:42 vm05 ceph-mon[58955]: osdmap e203: 8 total, 8 up, 8 in 2026-03-10T13:41:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:42 vm05 ceph-mon[51512]: from='client.50368 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm05-91018-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm05-91018-35"}]': finished 2026-03-10T13:41:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:42 vm05 ceph-mon[51512]: osdmap e203: 8 total, 8 up, 8 in 2026-03-10T13:41:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:43 vm09 ceph-mon[53367]: pgmap v254: 452 pgs: 19 creating+peering, 45 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 374 active+clean; 459 KiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:41:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:43 vm09 ceph-mon[53367]: osdmap e204: 8 total, 8 up, 8 in 2026-03-10T13:41:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:43 vm05 ceph-mon[58955]: pgmap v254: 452 pgs: 19 creating+peering, 45 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 374 active+clean; 459 KiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:41:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:43 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:43 vm05 ceph-mon[58955]: osdmap e204: 8 total, 8 up, 8 in 2026-03-10T13:41:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:43 vm05 ceph-mon[51512]: pgmap v254: 452 pgs: 19 creating+peering, 45 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 374 active+clean; 459 KiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:41:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:43 vm05 ceph-mon[51512]: osdmap e204: 8 total, 8 up, 8 in 2026-03-10T13:41:45.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:45.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:44 vm05 ceph-mon[58955]: osdmap e205: 8 total, 8 up, 8 in 2026-03-10T13:41:45.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:45.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:44 vm05 ceph-mon[58955]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:45.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:45.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:44 vm05 ceph-mon[51512]: osdmap e205: 8 total, 8 up, 8 in 2026-03-10T13:41:45.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:45.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:44 vm05 ceph-mon[51512]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:45.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:45.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:44 vm09 ceph-mon[53367]: osdmap e205: 8 total, 8 up, 8 in 2026-03-10T13:41:45.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:44 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:45.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:44 vm09 ceph-mon[53367]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:45 vm09 ceph-mon[53367]: pgmap v257: 428 pgs: 14 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 400 active+clean; 459 KiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:41:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:45 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:45.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:45 vm09 ceph-mon[53367]: from='client.50368 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-91018-35"}]': finished 2026-03-10T13:41:45.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:45.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:45 vm09 ceph-mon[53367]: osdmap e206: 8 total, 8 up, 8 in 2026-03-10T13:41:45.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-27"}]: dispatch 2026-03-10T13:41:45.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:45.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:45 vm09 ceph-mon[53367]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:45.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:45 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[58955]: pgmap v257: 428 pgs: 14 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 400 active+clean; 459 KiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:41:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[58955]: from='client.50368 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-91018-35"}]': finished 2026-03-10T13:41:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[58955]: osdmap e206: 8 total, 8 up, 8 in 2026-03-10T13:41:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-27"}]: dispatch 2026-03-10T13:41:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[58955]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:46.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[51512]: pgmap v257: 428 pgs: 14 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 400 active+clean; 459 KiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:41:46.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:46.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:46.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:46.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[51512]: from='client.50368 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm05-91018-35"}]': finished 2026-03-10T13:41:46.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-27", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:46.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[51512]: osdmap e206: 8 total, 8 up, 8 in 2026-03-10T13:41:46.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-27"}]: dispatch 2026-03-10T13:41:46.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/927666222' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:46.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[51512]: from='client.50368 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm05-91018-35"}]: dispatch 2026-03-10T13:41:46.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:47 vm09 ceph-mon[53367]: pgmap v260: 356 pgs: 11 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 331 active+clean; 458 KiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T13:41:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:47 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-27"}]': finished 2026-03-10T13:41:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:47 vm09 ceph-mon[53367]: from='client.50368 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm05-91018-35"}]': finished 2026-03-10T13:41:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:47 vm09 ceph-mon[53367]: osdmap e207: 8 total, 8 up, 8 in 2026-03-10T13:41:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:47 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-27", "mode": "writeback"}]: dispatch 2026-03-10T13:41:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:47 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:47 vm09 ceph-mon[53367]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:47 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:47 vm09 ceph-mon[53367]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:47 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-91018-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:47 vm09 ceph-mon[53367]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-91018-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:47.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:47 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[58955]: pgmap v260: 356 pgs: 11 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 331 active+clean; 458 KiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T13:41:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-27"}]': finished 2026-03-10T13:41:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[58955]: from='client.50368 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm05-91018-35"}]': finished 2026-03-10T13:41:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[58955]: osdmap e207: 8 total, 8 up, 8 in 2026-03-10T13:41:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-27", "mode": "writeback"}]: dispatch 2026-03-10T13:41:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[58955]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[58955]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-91018-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[58955]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-91018-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:48.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[51512]: pgmap v260: 356 pgs: 11 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 331 active+clean; 458 KiB data, 732 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T13:41:48.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-27"}]': finished 2026-03-10T13:41:48.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[51512]: from='client.50368 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm05-91018-35"}]': finished 2026-03-10T13:41:48.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[51512]: osdmap e207: 8 total, 8 up, 8 in 2026-03-10T13:41:48.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-27", "mode": "writeback"}]: dispatch 2026-03-10T13:41:48.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:48.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[51512]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:48.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:48.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[51512]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:48.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-91018-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:48.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[51512]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-91018-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:48.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:48.606 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:41:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:41:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:48 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-27", "mode": "writeback"}]': finished 2026-03-10T13:41:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:48 vm09 ceph-mon[53367]: from='client.49874 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-91018-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:48 vm09 ceph-mon[53367]: osdmap e208: 8 total, 8 up, 8 in 2026-03-10T13:41:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-91018-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:48 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3490488445' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-91051-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:48 vm09 ceph-mon[53367]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-91018-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:48 vm09 ceph-mon[53367]: from='client.49877 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-91051-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:48 vm09 ceph-mon[53367]: from='client.49877 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-91051-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:48 vm09 ceph-mon[53367]: osdmap e209: 8 total, 8 up, 8 in 2026-03-10T13:41:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-27", "mode": "writeback"}]': finished 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[58955]: from='client.49874 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-91018-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[58955]: osdmap e208: 8 total, 8 up, 8 in 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-91018-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3490488445' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-91051-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[58955]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-91018-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[58955]: from='client.49877 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-91051-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[58955]: from='client.49877 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-91051-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[58955]: osdmap e209: 8 total, 8 up, 8 in 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-27", "mode": "writeback"}]': finished 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[51512]: from='client.49874 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm05-91018-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[51512]: osdmap e208: 8 total, 8 up, 8 in 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-91018-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3490488445' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-91051-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[51512]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-91018-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[51512]: from='client.49877 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-91051-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:49.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[51512]: from='client.49877 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm05-91051-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:49.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[51512]: osdmap e209: 8 total, 8 up, 8 in 2026-03-10T13:41:49.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:41:50.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:49 vm09 ceph-mon[53367]: pgmap v263: 356 pgs: 32 unknown, 11 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 299 active+clean; 458 KiB data, 732 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:50.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:50.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:49 vm05 ceph-mon[58955]: pgmap v263: 356 pgs: 32 unknown, 11 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 299 active+clean; 458 KiB data, 732 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:49 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:49 vm05 ceph-mon[51512]: pgmap v263: 356 pgs: 32 unknown, 11 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 299 active+clean; 458 KiB data, 732 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:41:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:41:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:41:51.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:51.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:51 vm05 ceph-mon[58955]: from='client.49874 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-91018-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-91018-36"}]': finished 2026-03-10T13:41:51.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:51.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:51 vm05 ceph-mon[58955]: osdmap e210: 8 total, 8 up, 8 in 2026-03-10T13:41:51.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-27"}]: dispatch 2026-03-10T13:41:51.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:51.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:51 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:51.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:51 vm05 ceph-mon[51512]: from='client.49874 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-91018-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-91018-36"}]': finished 2026-03-10T13:41:51.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:51 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:51.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:51 vm05 ceph-mon[51512]: osdmap e210: 8 total, 8 up, 8 in 2026-03-10T13:41:51.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:51 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-27"}]: dispatch 2026-03-10T13:41:51.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:51 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:51.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:51.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:51 vm09 ceph-mon[53367]: from='client.49874 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm05-91018-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm05-91018-36"}]': finished 2026-03-10T13:41:51.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:41:51.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:51 vm09 ceph-mon[53367]: osdmap e210: 8 total, 8 up, 8 in 2026-03-10T13:41:51.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-27"}]: dispatch 2026-03-10T13:41:51.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:52.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[58955]: pgmap v266: 332 pgs: 8 unknown, 6 active+clean+snaptrim, 13 active+clean+snaptrim_wait, 305 active+clean; 458 KiB data, 737 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:41:52.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:52.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-27"}]': finished 2026-03-10T13:41:52.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[58955]: osdmap e211: 8 total, 8 up, 8 in 2026-03-10T13:41:52.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2330510287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm05-91051-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:52.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:52.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2330510287' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm05-91051-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:52.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[58955]: osdmap e212: 8 total, 8 up, 8 in 2026-03-10T13:41:52.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[51512]: pgmap v266: 332 pgs: 8 unknown, 6 active+clean+snaptrim, 13 active+clean+snaptrim_wait, 305 active+clean; 458 KiB data, 737 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:41:52.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:52.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-27"}]': finished 2026-03-10T13:41:52.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[51512]: osdmap e211: 8 total, 8 up, 8 in 2026-03-10T13:41:52.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2330510287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm05-91051-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:52.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:52.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2330510287' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm05-91051-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:52.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:52 vm05 ceph-mon[51512]: osdmap e212: 8 total, 8 up, 8 in 2026-03-10T13:41:52.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:52 vm09 ceph-mon[53367]: pgmap v266: 332 pgs: 8 unknown, 6 active+clean+snaptrim, 13 active+clean+snaptrim_wait, 305 active+clean; 458 KiB data, 737 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:41:52.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:52 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:41:52.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:52 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-27"}]': finished 2026-03-10T13:41:52.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:52 vm09 ceph-mon[53367]: osdmap e211: 8 total, 8 up, 8 in 2026-03-10T13:41:52.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2330510287' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm05-91051-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:52.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:52.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2330510287' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm05-91051-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:52.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:52 vm09 ceph-mon[53367]: osdmap e212: 8 total, 8 up, 8 in 2026-03-10T13:41:53.556 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: pgmap v268: 364 pgs: 40 unknown, 6 active+clean+snaptrim, 13 active+clean+snaptrim_wait, 305 active+clean; 458 KiB data, 737 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-10T13:41:53.556 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:53.556 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:53.556 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:41:53.556 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:53.556 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:53.557 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: from='client.49874 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-91018-36"}]': finished 2026-03-10T13:41:53.557 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: osdmap e213: 8 total, 8 up, 8 in 2026-03-10T13:41:53.557 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:53.557 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:53.557 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:53.557 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:53.557 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:53.557 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:53.557 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:53.561 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:53.561 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:53 vm09 ceph-mon[53367]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: pgmap v268: 364 pgs: 40 unknown, 6 active+clean+snaptrim, 13 active+clean+snaptrim_wait, 305 active+clean; 458 KiB data, 737 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: from='client.49874 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-91018-36"}]': finished 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: osdmap e213: 8 total, 8 up, 8 in 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:53.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[58955]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: pgmap v268: 364 pgs: 40 unknown, 6 active+clean+snaptrim, 13 active+clean+snaptrim_wait, 305 active+clean; 458 KiB data, 737 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: from='client.49874 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm05-91018-36"}]': finished 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: osdmap e213: 8 total, 8 up, 8 in 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3683482623' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: from='client.49874 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-91018-36"}]: dispatch 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-29","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:53.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:53 vm05 ceph-mon[51512]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:55.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:54 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:55.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:54 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:55.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:54 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: pgmap v271: 324 pgs: 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 716 B/s rd, 0 op/s 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.49874 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-91018-36"}]': finished 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.50392 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: osdmap e214: 8 total, 8 up, 8 in 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-91018-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-91018-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.50395 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-91018-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:56.045 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-91018-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: osdmap e215: 8 total, 8 up, 8 in 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-91018-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: pgmap v271: 324 pgs: 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 716 B/s rd, 0 op/s 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.49874 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-91018-36"}]': finished 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.50392 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: osdmap e214: 8 total, 8 up, 8 in 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-91018-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-91018-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.50395 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-91018-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-91018-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: osdmap e215: 8 total, 8 up, 8 in 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-91018-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.046 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:56.077 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: pgmap v271: 324 pgs: 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 716 B/s rd, 0 op/s 2026-03-10T13:41:56.077 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.49874 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm05-91018-36"}]': finished 2026-03-10T13:41:56.077 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-29","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:41:56.077 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.50392 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:56.078 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: osdmap e214: 8 total, 8 up, 8 in 2026-03-10T13:41:56.078 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:56.078 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:56.078 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:56.078 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.078 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.078 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.078 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.078 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-91018-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:56.078 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-91018-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:41:56.078 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.50395 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm05-91018-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:41:56.078 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-91018-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.078 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: osdmap e215: 8 total, 8 up, 8 in 2026-03-10T13:41:56.078 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-91018-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:41:56.078 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:57 vm09 ceph-mon[53367]: pgmap v274: 324 pgs: 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T13:41:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:57 vm09 ceph-mon[53367]: from='client.50392 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-46"}]': finished 2026-03-10T13:41:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:57 vm09 ceph-mon[53367]: osdmap e216: 8 total, 8 up, 8 in 2026-03-10T13:41:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:57 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:57 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:57 vm05 ceph-mon[58955]: pgmap v274: 324 pgs: 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T13:41:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:57 vm05 ceph-mon[58955]: from='client.50392 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-46"}]': finished 2026-03-10T13:41:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:57 vm05 ceph-mon[58955]: osdmap e216: 8 total, 8 up, 8 in 2026-03-10T13:41:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:58.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:57 vm05 ceph-mon[51512]: pgmap v274: 324 pgs: 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T13:41:58.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:57 vm05 ceph-mon[51512]: from='client.50392 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-46"}]': finished 2026-03-10T13:41:58.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:57 vm05 ceph-mon[51512]: osdmap e216: 8 total, 8 up, 8 in 2026-03-10T13:41:58.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-29", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:41:58.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:58.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:41:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:41:59.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:58 vm05 ceph-mon[58955]: from='client.50395 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-91018-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-91018-37"}]': finished 2026-03-10T13:41:59.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:58 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:59.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:58 vm05 ceph-mon[58955]: osdmap e217: 8 total, 8 up, 8 in 2026-03-10T13:41:59.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:58 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-29"}]: dispatch 2026-03-10T13:41:59.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:58 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:59.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:58 vm05 ceph-mon[51512]: from='client.50395 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-91018-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-91018-37"}]': finished 2026-03-10T13:41:59.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:58 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:59.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:58 vm05 ceph-mon[51512]: osdmap e217: 8 total, 8 up, 8 in 2026-03-10T13:41:59.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:58 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-29"}]: dispatch 2026-03-10T13:41:59.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:58 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:59.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:58 vm09 ceph-mon[53367]: from='client.50395 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm05-91018-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm05-91018-37"}]': finished 2026-03-10T13:41:59.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:58 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:41:59.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:58 vm09 ceph-mon[53367]: osdmap e217: 8 total, 8 up, 8 in 2026-03-10T13:41:59.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:58 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-29"}]: dispatch 2026-03-10T13:41:59.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:58 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:59.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[58955]: pgmap v277: 340 pgs: 16 unknown, 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-29"}]': finished 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[58955]: osdmap e218: 8 total, 8 up, 8 in 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-29", "mode": "writeback"}]: dispatch 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[58955]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[51512]: pgmap v277: 340 pgs: 16 unknown, 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-29"}]': finished 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[51512]: osdmap e218: 8 total, 8 up, 8 in 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-29", "mode": "writeback"}]: dispatch 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[51512]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:41:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:41:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:59 vm09 ceph-mon[53367]: pgmap v277: 340 pgs: 16 unknown, 32 creating+peering, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 458 KiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:42:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:59 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-29"}]': finished 2026-03-10T13:42:00.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:59 vm09 ceph-mon[53367]: osdmap e218: 8 total, 8 up, 8 in 2026-03-10T13:42:00.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:42:00.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-29", "mode": "writeback"}]: dispatch 2026-03-10T13:42:00.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:59 vm09 ceph-mon[53367]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:42:00.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:41:59 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:41:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:41:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:42:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:42:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-29", "mode": "writeback"}]': finished 2026-03-10T13:42:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[58955]: from='client.50392 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-46"}]': finished 2026-03-10T13:42:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:42:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[58955]: osdmap e219: 8 total, 8 up, 8 in 2026-03-10T13:42:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[58955]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[58955]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[58955]: from='client.50392 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-46"}]': finished 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[58955]: from='client.50395 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-91018-37"}]': finished 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[58955]: osdmap e220: 8 total, 8 up, 8 in 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[58955]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-29", "mode": "writeback"}]': finished 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[51512]: from='client.50392 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-46"}]': finished 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[51512]: osdmap e219: 8 total, 8 up, 8 in 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[51512]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[51512]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[51512]: from='client.50392 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-46"}]': finished 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[51512]: from='client.50395 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-91018-37"}]': finished 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[51512]: osdmap e220: 8 total, 8 up, 8 in 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:42:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:00 vm05 ceph-mon[51512]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:42:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:00 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:42:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-29", "mode": "writeback"}]': finished 2026-03-10T13:42:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:00 vm09 ceph-mon[53367]: from='client.50392 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-46"}]': finished 2026-03-10T13:42:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3784324174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:42:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:00 vm09 ceph-mon[53367]: osdmap e219: 8 total, 8 up, 8 in 2026-03-10T13:42:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:42:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:00 vm09 ceph-mon[53367]: from='client.50392 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-46"}]: dispatch 2026-03-10T13:42:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:00 vm09 ceph-mon[53367]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:42:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:00 vm09 ceph-mon[53367]: from='client.50392 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-46"}]': finished 2026-03-10T13:42:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:00 vm09 ceph-mon[53367]: from='client.50395 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm05-91018-37"}]': finished 2026-03-10T13:42:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:00 vm09 ceph-mon[53367]: osdmap e220: 8 total, 8 up, 8 in 2026-03-10T13:42:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:00 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2328944842' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:42:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:00 vm09 ceph-mon[53367]: from='client.50395 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-91018-37"}]: dispatch 2026-03-10T13:42:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: pgmap v280: 324 pgs: 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 4.4 MiB data, 750 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-10T13:42:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: from='client.50395 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-91018-37"}]': finished 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: from='client.50401 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: osdmap e221: 8 total, 8 up, 8 in 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-91018-38"}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-91018-38"}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm05-91018-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: pgmap v280: 324 pgs: 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 4.4 MiB data, 750 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: from='client.50395 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-91018-37"}]': finished 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: from='client.50401 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: osdmap e221: 8 total, 8 up, 8 in 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-91018-38"}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-91018-38"}]: dispatch 2026-03-10T13:42:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm05-91018-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: pgmap v280: 324 pgs: 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 4.4 MiB data, 750 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-10T13:42:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:02.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: from='client.50395 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm05-91018-37"}]': finished 2026-03-10T13:42:02.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: from='client.50401 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm05-91051-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:02.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: osdmap e221: 8 total, 8 up, 8 in 2026-03-10T13:42:02.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:02.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-91018-38"}]: dispatch 2026-03-10T13:42:02.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-91018-38"}]: dispatch 2026-03-10T13:42:02.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:01 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm05-91018-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:02 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:02 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm05-91018-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:02 vm05 ceph-mon[58955]: osdmap e222: 8 total, 8 up, 8 in 2026-03-10T13:42:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:02 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm05-91018-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm05-91018-38"}]: dispatch 2026-03-10T13:42:03.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:02 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:03.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:02 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm05-91018-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:03.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:02 vm05 ceph-mon[51512]: osdmap e222: 8 total, 8 up, 8 in 2026-03-10T13:42:03.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:02 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm05-91018-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm05-91018-38"}]: dispatch 2026-03-10T13:42:03.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:02 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:03.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:02 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm05-91018-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:03.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:02 vm09 ceph-mon[53367]: osdmap e222: 8 total, 8 up, 8 in 2026-03-10T13:42:03.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:02 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm05-91018-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm05-91018-38"}]: dispatch 2026-03-10T13:42:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:03 vm09 ceph-mon[53367]: pgmap v283: 324 pgs: 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 4.4 MiB data, 750 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-10T13:42:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:03 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:03 vm09 ceph-mon[53367]: from='client.50401 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-47"}]': finished 2026-03-10T13:42:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:03 vm09 ceph-mon[53367]: osdmap e223: 8 total, 8 up, 8 in 2026-03-10T13:42:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:03 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:03 vm05 ceph-mon[58955]: pgmap v283: 324 pgs: 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 4.4 MiB data, 750 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-10T13:42:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:03 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:03 vm05 ceph-mon[58955]: from='client.50401 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-47"}]': finished 2026-03-10T13:42:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:03 vm05 ceph-mon[58955]: osdmap e223: 8 total, 8 up, 8 in 2026-03-10T13:42:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:03 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:03 vm05 ceph-mon[51512]: pgmap v283: 324 pgs: 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 310 active+clean; 4.4 MiB data, 750 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-10T13:42:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:03 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:03 vm05 ceph-mon[51512]: from='client.50401 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm05-91051-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm05-91051-47"}]': finished 2026-03-10T13:42:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:03 vm05 ceph-mon[51512]: osdmap e223: 8 total, 8 up, 8 in 2026-03-10T13:42:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm05-91018-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm05-91018-38"}]': finished 2026-03-10T13:42:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:04 vm09 ceph-mon[53367]: osdmap e224: 8 total, 8 up, 8 in 2026-03-10T13:42:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-29"}]: dispatch 2026-03-10T13:42:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm05-91018-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm05-91018-38"}]': finished 2026-03-10T13:42:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:04 vm05 ceph-mon[58955]: osdmap e224: 8 total, 8 up, 8 in 2026-03-10T13:42:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:04 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-29"}]: dispatch 2026-03-10T13:42:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:05.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:05.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm05-91018-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm05-91018-38"}]': finished 2026-03-10T13:42:05.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:05.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:04 vm05 ceph-mon[51512]: osdmap e224: 8 total, 8 up, 8 in 2026-03-10T13:42:05.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-29"}]: dispatch 2026-03-10T13:42:05.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:05 vm09 ceph-mon[53367]: pgmap v286: 332 pgs: 3 creating+peering, 5 unknown, 6 active+clean+snaptrim, 22 active+clean+snaptrim_wait, 296 active+clean; 8.4 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-10T13:42:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:05 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:05 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:42:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:05 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-29"}]': finished 2026-03-10T13:42:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:05 vm09 ceph-mon[53367]: osdmap e225: 8 total, 8 up, 8 in 2026-03-10T13:42:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:05 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:06.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:05 vm09 ceph-mon[53367]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:06.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:05 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[58955]: pgmap v286: 332 pgs: 3 creating+peering, 5 unknown, 6 active+clean+snaptrim, 22 active+clean+snaptrim_wait, 296 active+clean; 8.4 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-10T13:42:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:42:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-29"}]': finished 2026-03-10T13:42:06.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[58955]: osdmap e225: 8 total, 8 up, 8 in 2026-03-10T13:42:06.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:06.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[58955]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:06.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:06.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[51512]: pgmap v286: 332 pgs: 3 creating+peering, 5 unknown, 6 active+clean+snaptrim, 22 active+clean+snaptrim_wait, 296 active+clean; 8.4 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-10T13:42:06.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:06.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:42:06.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-29"}]': finished 2026-03-10T13:42:06.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[51512]: osdmap e225: 8 total, 8 up, 8 in 2026-03-10T13:42:06.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:06.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[51512]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:06.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:07 vm09 ceph-mon[53367]: pgmap v289: 332 pgs: 8 unknown, 6 active+clean+snaptrim, 22 active+clean+snaptrim_wait, 296 active+clean; 8.4 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-10T13:42:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:07 vm09 ceph-mon[53367]: from='client.50401 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-47"}]': finished 2026-03-10T13:42:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:07 vm09 ceph-mon[53367]: osdmap e226: 8 total, 8 up, 8 in 2026-03-10T13:42:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-91018-38"}]: dispatch 2026-03-10T13:42:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:07 vm09 ceph-mon[53367]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:07 vm05 ceph-mon[58955]: pgmap v289: 332 pgs: 8 unknown, 6 active+clean+snaptrim, 22 active+clean+snaptrim_wait, 296 active+clean; 8.4 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-10T13:42:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:07 vm05 ceph-mon[58955]: from='client.50401 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-47"}]': finished 2026-03-10T13:42:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:07 vm05 ceph-mon[58955]: osdmap e226: 8 total, 8 up, 8 in 2026-03-10T13:42:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:07 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-91018-38"}]: dispatch 2026-03-10T13:42:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:07 vm05 ceph-mon[58955]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:07 vm05 ceph-mon[51512]: pgmap v289: 332 pgs: 8 unknown, 6 active+clean+snaptrim, 22 active+clean+snaptrim_wait, 296 active+clean; 8.4 MiB data, 767 MiB used, 159 GiB / 160 GiB avail; 14 KiB/s rd, 0 B/s wr, 26 op/s 2026-03-10T13:42:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:07 vm05 ceph-mon[51512]: from='client.50401 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm05-91051-47"}]': finished 2026-03-10T13:42:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:07 vm05 ceph-mon[51512]: osdmap e226: 8 total, 8 up, 8 in 2026-03-10T13:42:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-91018-38"}]: dispatch 2026-03-10T13:42:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1646654113' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:07 vm05 ceph-mon[51512]: from='client.50401 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-47"}]: dispatch 2026-03-10T13:42:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-91018-38"}]': finished 2026-03-10T13:42:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:08 vm09 ceph-mon[53367]: from='client.50401 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-47"}]': finished 2026-03-10T13:42:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:08 vm09 ceph-mon[53367]: osdmap e227: 8 total, 8 up, 8 in 2026-03-10T13:42:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-91018-38"}]: dispatch 2026-03-10T13:42:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:08 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:08 vm09 ceph-mon[53367]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:08 vm09 ceph-mon[53367]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:08 vm09 ceph-mon[53367]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm05-91051-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm05-91051-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:08.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:42:08.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:08.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:42:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-91018-38"}]': finished 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[58955]: from='client.50401 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-47"}]': finished 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[58955]: osdmap e227: 8 total, 8 up, 8 in 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-91018-38"}]: dispatch 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[58955]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[58955]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[58955]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm05-91051-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm05-91051-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm05-91018-38"}]': finished 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[51512]: from='client.50401 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm05-91051-47"}]': finished 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[51512]: osdmap e227: 8 total, 8 up, 8 in 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-91018-38"}]: dispatch 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[51512]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[51512]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[51512]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm05-91051-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm05-91051-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:42:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:10.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: pgmap v292: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 4.4 MiB data, 767 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:10.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:42:10.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-91018-38"}]': finished 2026-03-10T13:42:10.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:10.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: from='client.50410 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm05-91051-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:10.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm05-91051-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:10.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: osdmap e228: 8 total, 8 up, 8 in 2026-03-10T13:42:10.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm05-91051-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-91018-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-91018-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: pgmap v292: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 4.4 MiB data, 767 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-91018-38"}]': finished 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: from='client.50410 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm05-91051-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm05-91051-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: osdmap e228: 8 total, 8 up, 8 in 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm05-91051-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-91018-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-91018-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:10.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:42:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:42:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:42:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: pgmap v292: 324 pgs: 32 unknown, 6 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 278 active+clean; 4.4 MiB data, 767 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:42:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2818750445' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm05-91018-38"}]': finished 2026-03-10T13:42:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-31","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: from='client.50410 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm05-91051-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm05-91051-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: osdmap e228: 8 total, 8 up, 8 in 2026-03-10T13:42:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-31", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:42:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm05-91051-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-91018-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-91018-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:42:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[58955]: from='client.50413 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-91018-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[58955]: osdmap e229: 8 total, 8 up, 8 in 2026-03-10T13:42:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-91018-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-31"}]: dispatch 2026-03-10T13:42:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[58955]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-91018-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[58955]: from='client.50410 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm05-91051-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm05-91051-48"}]': finished 2026-03-10T13:42:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-31"}]': finished 2026-03-10T13:42:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[58955]: osdmap e230: 8 total, 8 up, 8 in 2026-03-10T13:42:11.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:42:11.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[51512]: from='client.50413 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-91018-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:11.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[51512]: osdmap e229: 8 total, 8 up, 8 in 2026-03-10T13:42:11.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-91018-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:11.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-31"}]: dispatch 2026-03-10T13:42:11.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[51512]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-91018-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:11.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:11.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[51512]: from='client.50410 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm05-91051-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm05-91051-48"}]': finished 2026-03-10T13:42:11.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-31"}]': finished 2026-03-10T13:42:11.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:10 vm05 ceph-mon[51512]: osdmap e230: 8 total, 8 up, 8 in 2026-03-10T13:42:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:42:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:10 vm09 ceph-mon[53367]: from='client.50413 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm05-91018-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:10 vm09 ceph-mon[53367]: osdmap e229: 8 total, 8 up, 8 in 2026-03-10T13:42:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-91018-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-31"}]: dispatch 2026-03-10T13:42:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:10 vm09 ceph-mon[53367]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-91018-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:10 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:10 vm09 ceph-mon[53367]: from='client.50410 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm05-91051-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm05-91051-48"}]': finished 2026-03-10T13:42:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-31"}]': finished 2026-03-10T13:42:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:10 vm09 ceph-mon[53367]: osdmap e230: 8 total, 8 up, 8 in 2026-03-10T13:42:12.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[58955]: pgmap v295: 324 pgs: 17 creating+peering, 5 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 296 active+clean; 4.4 MiB data, 760 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:12.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-31", "mode": "writeback"}]: dispatch 2026-03-10T13:42:12.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:12.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:12.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:42:12.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[58955]: from='client.50413 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-91018-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-91018-39"}]': finished 2026-03-10T13:42:12.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-31", "mode": "writeback"}]': finished 2026-03-10T13:42:12.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[58955]: osdmap e231: 8 total, 8 up, 8 in 2026-03-10T13:42:12.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[51512]: pgmap v295: 324 pgs: 17 creating+peering, 5 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 296 active+clean; 4.4 MiB data, 760 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:12.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-31", "mode": "writeback"}]: dispatch 2026-03-10T13:42:12.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:12.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:12.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:42:12.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[51512]: from='client.50413 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-91018-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-91018-39"}]': finished 2026-03-10T13:42:12.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-31", "mode": "writeback"}]': finished 2026-03-10T13:42:12.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:11 vm05 ceph-mon[51512]: osdmap e231: 8 total, 8 up, 8 in 2026-03-10T13:42:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:11 vm09 ceph-mon[53367]: pgmap v295: 324 pgs: 17 creating+peering, 5 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 296 active+clean; 4.4 MiB data, 760 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-31", "mode": "writeback"}]: dispatch 2026-03-10T13:42:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:11 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:11 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:42:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:11 vm09 ceph-mon[53367]: from='client.50413 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm05-91018-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm05-91018-39"}]': finished 2026-03-10T13:42:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-31", "mode": "writeback"}]': finished 2026-03-10T13:42:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:11 vm09 ceph-mon[53367]: osdmap e231: 8 total, 8 up, 8 in 2026-03-10T13:42:13.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:12 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:13.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:12 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:13.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:12 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:13.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:12 vm05 ceph-mon[58955]: osdmap e232: 8 total, 8 up, 8 in 2026-03-10T13:42:13.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:12 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-31"}]: dispatch 2026-03-10T13:42:13.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:12 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:13.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:12 vm05 ceph-mon[58955]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:13.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:12 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:13.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:12 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:13.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:12 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:13.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:12 vm05 ceph-mon[51512]: osdmap e232: 8 total, 8 up, 8 in 2026-03-10T13:42:13.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:12 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-31"}]: dispatch 2026-03-10T13:42:13.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:12 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:13.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:12 vm05 ceph-mon[51512]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:13.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:12 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:13.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:12 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:13.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:12 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:13.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:12 vm09 ceph-mon[53367]: osdmap e232: 8 total, 8 up, 8 in 2026-03-10T13:42:13.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:12 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-31"}]: dispatch 2026-03-10T13:42:13.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:12 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:13.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:12 vm09 ceph-mon[53367]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:14.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[58955]: pgmap v298: 340 pgs: 16 unknown, 17 creating+peering, 5 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 296 active+clean; 4.4 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-10T13:42:14.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:14.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:42:14.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-31"}]': finished 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[58955]: from='client.50410 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-91051-48"}]': finished 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[58955]: osdmap e233: 8 total, 8 up, 8 in 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[58955]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[58955]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[51512]: pgmap v298: 340 pgs: 16 unknown, 17 creating+peering, 5 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 296 active+clean; 4.4 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-31"}]': finished 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[51512]: from='client.50410 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-91051-48"}]': finished 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[51512]: osdmap e233: 8 total, 8 up, 8 in 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[51512]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:14.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:13 vm05 ceph-mon[51512]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:14.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:13 vm09 ceph-mon[53367]: pgmap v298: 340 pgs: 16 unknown, 17 creating+peering, 5 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 296 active+clean; 4.4 MiB data, 760 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-10T13:42:14.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:13 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:14.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:13 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:42:14.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:13 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-31"}]': finished 2026-03-10T13:42:14.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:13 vm09 ceph-mon[53367]: from='client.50410 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm05-91051-48"}]': finished 2026-03-10T13:42:14.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:13 vm09 ceph-mon[53367]: osdmap e233: 8 total, 8 up, 8 in 2026-03-10T13:42:14.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:13 vm09 ceph-mon[53367]: from='client.50410 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:14.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:13 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1899851801' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-91051-48"}]: dispatch 2026-03-10T13:42:14.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:13 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:14.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:13 vm09 ceph-mon[53367]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:15.075 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:15.075 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:14 vm09 ceph-mon[53367]: from='client.50410 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-91051-48"}]': finished 2026-03-10T13:42:15.075 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:14 vm09 ceph-mon[53367]: from='client.50413 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-91018-39"}]': finished 2026-03-10T13:42:15.075 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:14 vm09 ceph-mon[53367]: osdmap e234: 8 total, 8 up, 8 in 2026-03-10T13:42:15.075 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:15.075 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:14 vm09 ceph-mon[53367]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:15.075 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:14 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:15.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:15.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:14 vm05 ceph-mon[58955]: from='client.50410 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-91051-48"}]': finished 2026-03-10T13:42:15.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:14 vm05 ceph-mon[58955]: from='client.50413 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-91018-39"}]': finished 2026-03-10T13:42:15.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:14 vm05 ceph-mon[58955]: osdmap e234: 8 total, 8 up, 8 in 2026-03-10T13:42:15.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:15.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:14 vm05 ceph-mon[58955]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:15.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:15.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:15.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:14 vm05 ceph-mon[51512]: from='client.50410 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm05-91051-48"}]': finished 2026-03-10T13:42:15.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:14 vm05 ceph-mon[51512]: from='client.50413 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm05-91018-39"}]': finished 2026-03-10T13:42:15.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:14 vm05 ceph-mon[51512]: osdmap e234: 8 total, 8 up, 8 in 2026-03-10T13:42:15.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2934382401' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:15.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:14 vm05 ceph-mon[51512]: from='client.50413 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-91018-39"}]: dispatch 2026-03-10T13:42:15.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:14 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[58955]: pgmap v301: 324 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 314 active+clean; 4.4 MiB data, 764 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T13:42:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[58955]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[58955]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-91051-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[58955]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-91051-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[58955]: from='client.50413 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-91018-39"}]': finished 2026-03-10T13:42:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[58955]: from='client.49907 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-91051-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-91051-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[58955]: osdmap e235: 8 total, 8 up, 8 in 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[58955]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-91051-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[51512]: pgmap v301: 324 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 314 active+clean; 4.4 MiB data, 764 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[51512]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[51512]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-91051-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[51512]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-91051-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[51512]: from='client.50413 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-91018-39"}]': finished 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[51512]: from='client.49907 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-91051-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-91051-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[51512]: osdmap e235: 8 total, 8 up, 8 in 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[51512]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-91051-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:16.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:15 vm09 ceph-mon[53367]: pgmap v301: 324 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 314 active+clean; 4.4 MiB data, 764 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T13:42:16.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:15 vm09 ceph-mon[53367]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:15 vm09 ceph-mon[53367]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-91051-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:16.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:15 vm09 ceph-mon[53367]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-91051-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:16.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:15 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:16.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:15 vm09 ceph-mon[53367]: from='client.50413 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm05-91018-39"}]': finished 2026-03-10T13:42:16.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:15 vm09 ceph-mon[53367]: from='client.49907 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm05-91051-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:16.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-91051-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:15 vm09 ceph-mon[53367]: osdmap e235: 8 total, 8 up, 8 in 2026-03-10T13:42:16.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:15 vm09 ceph-mon[53367]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-91051-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:16.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:17.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[58955]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:17.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[58955]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm05-91018-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[58955]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm05-91018-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[58955]: from='client.50422 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm05-91018-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm05-91018-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[58955]: osdmap e236: 8 total, 8 up, 8 in 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[58955]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm05-91018-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[51512]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[51512]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm05-91018-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[51512]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm05-91018-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[51512]: from='client.50422 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm05-91018-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm05-91018-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[51512]: osdmap e236: 8 total, 8 up, 8 in 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[51512]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm05-91018-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:42:17.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:16 vm09 ceph-mon[53367]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:16 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:17.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:16 vm09 ceph-mon[53367]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm05-91018-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:17.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:16 vm09 ceph-mon[53367]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm05-91018-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:17.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:17.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:16 vm09 ceph-mon[53367]: from='client.50422 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm05-91018-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:17.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm05-91018-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:16 vm09 ceph-mon[53367]: osdmap e236: 8 total, 8 up, 8 in 2026-03-10T13:42:17.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:16 vm09 ceph-mon[53367]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm05-91018-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:17.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:16 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:42:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[58955]: pgmap v304: 324 pgs: 32 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 282 active+clean; 4.4 MiB data, 764 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:42:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[58955]: from='client.49907 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-91051-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-91051-49"}]': finished 2026-03-10T13:42:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:42:18.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[58955]: osdmap e237: 8 total, 8 up, 8 in 2026-03-10T13:42:18.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-33"}]: dispatch 2026-03-10T13:42:18.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[51512]: pgmap v304: 324 pgs: 32 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 282 active+clean; 4.4 MiB data, 764 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:42:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[51512]: from='client.49907 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-91051-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-91051-49"}]': finished 2026-03-10T13:42:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:42:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[51512]: osdmap e237: 8 total, 8 up, 8 in 2026-03-10T13:42:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-33"}]: dispatch 2026-03-10T13:42:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:18.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:17 vm09 ceph-mon[53367]: pgmap v304: 324 pgs: 32 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 282 active+clean; 4.4 MiB data, 764 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:42:18.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:17 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:18.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:18.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:17 vm09 ceph-mon[53367]: from='client.49907 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm05-91051-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm05-91051-49"}]': finished 2026-03-10T13:42:18.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:42:18.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:17 vm09 ceph-mon[53367]: osdmap e237: 8 total, 8 up, 8 in 2026-03-10T13:42:18.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-33"}]: dispatch 2026-03-10T13:42:18.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:17 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:18.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:42:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:42:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:19 vm05 ceph-mon[58955]: pgmap v307: 332 pgs: 40 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 282 active+clean; 4.4 MiB data, 764 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:42:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:19 vm05 ceph-mon[58955]: from='client.50422 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm05-91018-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm05-91018-40"}]': finished 2026-03-10T13:42:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-33"}]': finished 2026-03-10T13:42:20.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:19 vm05 ceph-mon[58955]: osdmap e238: 8 total, 8 up, 8 in 2026-03-10T13:42:20.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-33", "mode": "writeback"}]: dispatch 2026-03-10T13:42:20.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:19 vm05 ceph-mon[51512]: pgmap v307: 332 pgs: 40 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 282 active+clean; 4.4 MiB data, 764 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:42:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:19 vm05 ceph-mon[51512]: from='client.50422 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm05-91018-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm05-91018-40"}]': finished 2026-03-10T13:42:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-33"}]': finished 2026-03-10T13:42:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:19 vm05 ceph-mon[51512]: osdmap e238: 8 total, 8 up, 8 in 2026-03-10T13:42:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:19 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-33", "mode": "writeback"}]: dispatch 2026-03-10T13:42:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:20.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:42:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:42:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:42:20.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:19 vm09 ceph-mon[53367]: pgmap v307: 332 pgs: 40 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 282 active+clean; 4.4 MiB data, 764 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:20.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:42:20.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:19 vm09 ceph-mon[53367]: from='client.50422 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm05-91018-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm05-91018-40"}]': finished 2026-03-10T13:42:20.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-33"}]': finished 2026-03-10T13:42:20.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:19 vm09 ceph-mon[53367]: osdmap e238: 8 total, 8 up, 8 in 2026-03-10T13:42:20.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-33", "mode": "writeback"}]: dispatch 2026-03-10T13:42:20.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:42:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-33", "mode": "writeback"}]': finished 2026-03-10T13:42:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[58955]: osdmap e239: 8 total, 8 up, 8 in 2026-03-10T13:42:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[58955]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[58955]: from='client.49907 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-91051-49"}]': finished 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[58955]: osdmap e240: 8 total, 8 up, 8 in 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-33"}]: dispatch 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[58955]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[58955]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-33", "mode": "writeback"}]': finished 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[51512]: osdmap e239: 8 total, 8 up, 8 in 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[51512]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[51512]: from='client.49907 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-91051-49"}]': finished 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[51512]: osdmap e240: 8 total, 8 up, 8 in 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-33"}]: dispatch 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[51512]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:20 vm05 ceph-mon[51512]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:20 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:42:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-33", "mode": "writeback"}]': finished 2026-03-10T13:42:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:20 vm09 ceph-mon[53367]: osdmap e239: 8 total, 8 up, 8 in 2026-03-10T13:42:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:20 vm09 ceph-mon[53367]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:20 vm09 ceph-mon[53367]: from='client.49907 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm05-91051-49"}]': finished 2026-03-10T13:42:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:20 vm09 ceph-mon[53367]: osdmap e240: 8 total, 8 up, 8 in 2026-03-10T13:42:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-33"}]: dispatch 2026-03-10T13:42:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:20 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3938569777' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:20 vm09 ceph-mon[53367]: from='client.49907 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-91051-49"}]: dispatch 2026-03-10T13:42:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:20 vm09 ceph-mon[53367]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[58955]: pgmap v310: 332 pgs: 8 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 314 active+clean; 4.4 MiB data, 765 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:42:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:42:22.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-33"}]': finished 2026-03-10T13:42:22.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[58955]: from='client.49907 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-91051-49"}]': finished 2026-03-10T13:42:22.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[58955]: from='client.50422 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-91018-40"}]': finished 2026-03-10T13:42:22.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[58955]: osdmap e241: 8 total, 8 up, 8 in 2026-03-10T13:42:22.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:22.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[58955]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[51512]: pgmap v310: 332 pgs: 8 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 314 active+clean; 4.4 MiB data, 765 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:42:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:42:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-33"}]': finished 2026-03-10T13:42:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[51512]: from='client.49907 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-91051-49"}]': finished 2026-03-10T13:42:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[51512]: from='client.50422 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-91018-40"}]': finished 2026-03-10T13:42:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[51512]: osdmap e241: 8 total, 8 up, 8 in 2026-03-10T13:42:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:21 vm05 ceph-mon[51512]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:21 vm09 ceph-mon[53367]: pgmap v310: 332 pgs: 8 unknown, 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 314 active+clean; 4.4 MiB data, 765 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:42:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:21 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:42:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-33"}]': finished 2026-03-10T13:42:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:21 vm09 ceph-mon[53367]: from='client.49907 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm05-91051-49"}]': finished 2026-03-10T13:42:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:21 vm09 ceph-mon[53367]: from='client.50422 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm05-91018-40"}]': finished 2026-03-10T13:42:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:21 vm09 ceph-mon[53367]: osdmap e241: 8 total, 8 up, 8 in 2026-03-10T13:42:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:21 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2729275402' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:21 vm09 ceph-mon[53367]: from='client.50422 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-91018-40"}]: dispatch 2026-03-10T13:42:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-91051-50"}]: dispatch 2026-03-10T13:42:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-91051-50"}]: dispatch 2026-03-10T13:42:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm05-91051-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:22 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:42:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:22 vm09 ceph-mon[53367]: from='client.50422 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-91018-40"}]': finished 2026-03-10T13:42:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm05-91051-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:22 vm09 ceph-mon[53367]: osdmap e242: 8 total, 8 up, 8 in 2026-03-10T13:42:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm05-91051-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm05-91051-50"}]: dispatch 2026-03-10T13:42:23.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-91051-50"}]: dispatch 2026-03-10T13:42:23.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-91051-50"}]: dispatch 2026-03-10T13:42:23.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:23.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm05-91051-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:23.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:42:23.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[58955]: from='client.50422 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-91018-40"}]': finished 2026-03-10T13:42:23.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm05-91051-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:23.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[58955]: osdmap e242: 8 total, 8 up, 8 in 2026-03-10T13:42:23.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm05-91051-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm05-91051-50"}]: dispatch 2026-03-10T13:42:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-91051-50"}]: dispatch 2026-03-10T13:42:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-91051-50"}]: dispatch 2026-03-10T13:42:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm05-91051-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:42:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[51512]: from='client.50422 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm05-91018-40"}]': finished 2026-03-10T13:42:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2971989517' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm05-91051-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[51512]: osdmap e242: 8 total, 8 up, 8 in 2026-03-10T13:42:23.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm05-91051-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm05-91051-50"}]: dispatch 2026-03-10T13:42:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:23 vm09 ceph-mon[53367]: pgmap v313: 324 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 314 active+clean; 4.4 MiB data, 765 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:42:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:23 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:23 vm09 ceph-mon[53367]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:23 vm09 ceph-mon[53367]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm05-91018-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:23 vm09 ceph-mon[53367]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm05-91018-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:23 vm09 ceph-mon[53367]: from='client.50431 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm05-91018-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
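The pattern repeating above is the per-test setup of the rados API suite: each case first tries to remove any leftover erasure-code profile and CRUSH rule, then (re)creates a k=2, m=1 profile and an 8-PG erasure-coded pool from it. The tests send these as JSON mon commands; a rough ceph CLI equivalent of the create half, using the generated names from this run, would be:

    # create the EC profile the pool will be built from
    ceph osd erasure-code-profile set testprofile-ExecuteClass_vm05-91018-41 \
        k=2 m=1 crush-failure-domain=osd
    # 8 PGs / 8 PGPs, pool type "erasure", backed by that profile
    ceph osd pool create ExecuteClass_vm05-91018-41 8 8 erasure \
        testprofile-ExecuteClass_vm05-91018-41

The doubled "dispatch" lines (first from client.? with a v1 address, then from a bare client id such as client.50431) appear to be the same command as received and as re-dispatched internally by the mons; the matching "finished" line marks the commit, and the osdmap epoch (e242, e243, ...) increments with each committed change.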
2026-03-10T13:42:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm05-91018-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:23 vm09 ceph-mon[53367]: osdmap e243: 8 total, 8 up, 8 in 2026-03-10T13:42:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:23 vm09 ceph-mon[53367]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm05-91018-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[58955]: pgmap v313: 324 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 314 active+clean; 4.4 MiB data, 765 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[58955]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[58955]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm05-91018-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[58955]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm05-91018-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[58955]: from='client.50431 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm05-91018-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm05-91018-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[58955]: osdmap e243: 8 total, 8 up, 8 in 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[58955]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm05-91018-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:24.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[51512]: pgmap v313: 324 pgs: 4 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 314 active+clean; 4.4 MiB data, 765 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:42:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[51512]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[51512]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm05-91018-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[51512]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm05-91018-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[51512]: from='client.50431 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm05-91018-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm05-91018-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[51512]: osdmap e243: 8 total, 8 up, 8 in 2026-03-10T13:42:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[51512]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm05-91018-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:25.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:25 vm09 ceph-mon[53367]: pgmap v316: 324 pgs: 32 unknown, 3 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 286 active+clean; 4.4 MiB data, 765 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:42:25.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:25 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2971989517' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm05-91051-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm05-91051-50"}]': finished 2026-03-10T13:42:25.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:25.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:25 vm09 ceph-mon[53367]: osdmap e244: 8 total, 8 up, 8 in 2026-03-10T13:42:25.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:25.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:42:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:25 vm05 ceph-mon[58955]: pgmap v316: 324 pgs: 32 unknown, 3 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 286 active+clean; 4.4 MiB data, 765 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:42:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm05-91051-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm05-91051-50"}]': finished 2026-03-10T13:42:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:25 vm05 ceph-mon[58955]: osdmap e244: 8 total, 8 up, 8 in 2026-03-10T13:42:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:42:26.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:25 vm05 ceph-mon[51512]: pgmap v316: 324 pgs: 32 unknown, 3 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 286 active+clean; 4.4 MiB data, 765 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:42:26.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:25 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2971989517' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm05-91051-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm05-91051-50"}]': finished 2026-03-10T13:42:26.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-35","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:26.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:25 vm05 ceph-mon[51512]: osdmap e244: 8 total, 8 up, 8 in 2026-03-10T13:42:26.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:26.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-35", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:42:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:26 vm09 ceph-mon[53367]: from='client.50431 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm05-91018-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm05-91018-41"}]': finished 2026-03-10T13:42:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:42:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:26 vm09 ceph-mon[53367]: osdmap e245: 8 total, 8 up, 8 in 2026-03-10T13:42:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-35"}]: dispatch 2026-03-10T13:42:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-35"}]': finished 2026-03-10T13:42:27.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:26 vm09 ceph-mon[53367]: osdmap e246: 8 total, 8 up, 8 in 2026-03-10T13:42:27.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-35", "mode": "writeback"}]: dispatch
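The three mon commands dispatched just above are the standard cache-tiering setup sequence: attach the cache pool to the base pool ("osd tier add", here with --force-nonempty because the base pool already holds test objects), route client I/O through it ("osd tier set-overlay"), then switch the cache pool into writeback mode ("osd tier cache-mode"). Each step lands in its own osdmap epoch (e245, e246 above). A rough CLI equivalent using this run's pool names:

    # attach the cache pool to the base pool despite existing data
    ceph osd tier add test-rados-api-vm05-91276-6 test-rados-api-vm05-91276-35 --force-nonempty
    # direct client traffic at the base pool through the cache tier
    ceph osd tier set-overlay test-rados-api-vm05-91276-6 test-rados-api-vm05-91276-35
    # absorb writes in the cache and flush to the base pool later
    ceph osd tier cache-mode test-rados-api-vm05-91276-35 writeback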
2026-03-10T13:42:27.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-91051-50"}]: dispatch 2026-03-10T13:42:27.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[58955]: from='client.50431 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm05-91018-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm05-91018-41"}]': finished 2026-03-10T13:42:27.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:42:27.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[58955]: osdmap e245: 8 total, 8 up, 8 in 2026-03-10T13:42:27.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-35"}]: dispatch 2026-03-10T13:42:27.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:27.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-35"}]': finished 2026-03-10T13:42:27.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[58955]: osdmap e246: 8 total, 8 up, 8 in 2026-03-10T13:42:27.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-35", "mode": "writeback"}]: dispatch 2026-03-10T13:42:27.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-91051-50"}]: dispatch 2026-03-10T13:42:27.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[51512]: from='client.50431 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm05-91018-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm05-91018-41"}]': finished 2026-03-10T13:42:27.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:42:27.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[51512]: osdmap e245: 8 total, 8 up, 8 in 2026-03-10T13:42:27.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-35"}]: dispatch 2026-03-10T13:42:27.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:27.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-35"}]': finished 2026-03-10T13:42:27.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[51512]: osdmap e246: 8 total, 8 up, 8 in 2026-03-10T13:42:27.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-35", "mode": "writeback"}]: dispatch 2026-03-10T13:42:27.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-91051-50"}]: dispatch 2026-03-10T13:42:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:27 vm09 ceph-mon[53367]: pgmap v319: 340 pgs: 48 unknown, 3 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 286 active+clean; 4.4 MiB data, 765 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:42:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:27 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:42:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-35", "mode": "writeback"}]': finished 2026-03-10T13:42:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-91051-50"}]': finished 2026-03-10T13:42:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:27 vm09 ceph-mon[53367]: osdmap e247: 8 total, 8 up, 8 in 2026-03-10T13:42:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:27 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-91051-50"}]: dispatch 2026-03-10T13:42:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:27 vm09 ceph-mon[53367]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[58955]: pgmap v319: 340 pgs: 48 unknown, 3 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 286 active+clean; 4.4 MiB data, 765 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:42:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:42:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-35", "mode": "writeback"}]': finished 2026-03-10T13:42:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-91051-50"}]': finished 2026-03-10T13:42:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[58955]: osdmap e247: 8 total, 8 up, 8 in 2026-03-10T13:42:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-91051-50"}]: dispatch 2026-03-10T13:42:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[58955]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:28.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[51512]: pgmap v319: 340 pgs: 48 unknown, 3 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 286 active+clean; 4.4 MiB data, 765 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:42:28.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:28.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:42:28.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-35", "mode": "writeback"}]': finished 2026-03-10T13:42:28.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm05-91051-50"}]': finished 2026-03-10T13:42:28.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[51512]: osdmap e247: 8 total, 8 up, 8 in 2026-03-10T13:42:28.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:28.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-91051-50"}]: dispatch 2026-03-10T13:42:28.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[51512]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:28.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:28.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:42:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:42:29.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:29 vm09 ceph-mon[53367]: pgmap v322: 324 pgs: 32 unknown, 3 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 286 active+clean; 4.4 MiB data, 765 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:29.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-91051-50"}]': finished 2026-03-10T13:42:29.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:29 vm09 ceph-mon[53367]: from='client.50431 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-91018-41"}]': finished
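The "Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" lines fire because the cache pool was put into writeback mode without any hit_set parameters, so the OSDs cannot track object temperature for promotion and eviction; the warning flaps (cleared earlier in this log, raised again here) as the tests repeatedly attach and detach tiers. On a production cluster it would be addressed by configuring hit_sets on the cache pool, sketched below; the count and period values are illustrative, not taken from this run:

    # track recency with bloom-filter hit_sets on the cache pool
    ceph osd pool set test-rados-api-vm05-91276-35 hit_set_type bloom
    ceph osd pool set test-rados-api-vm05-91276-35 hit_set_count 8
    ceph osd pool set test-rados-api-vm05-91276-35 hit_set_period 60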
2026-03-10T13:42:29.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:29.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:29 vm09 ceph-mon[53367]: osdmap e248: 8 total, 8 up, 8 in 2026-03-10T13:42:29.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:29 vm09 ceph-mon[53367]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:29.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:42:29.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:29.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:29 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[58955]: pgmap v322: 324 pgs: 32 unknown, 3 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 286 active+clean; 4.4 MiB data, 765 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-91051-50"}]': finished 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[58955]: from='client.50431 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-91018-41"}]': finished 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[58955]: osdmap e248: 8 total, 8 up, 8 in 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[58955]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[51512]: pgmap v322: 324 pgs: 32 unknown, 3 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 286 active+clean; 4.4 MiB data, 765 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2971989517' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm05-91051-50"}]': finished 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[51512]: from='client.50431 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm05-91018-41"}]': finished 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1659137731' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[51512]: osdmap e248: 8 total, 8 up, 8 in 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[51512]: from='client.50431 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-91018-41"}]: dispatch 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:29 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[58955]: from='client.50431 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-91018-41"}]': finished 2026-03-10T13:42:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[58955]: osdmap e249: 8 total, 8 up, 8 in 2026-03-10T13:42:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1964061180' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[58955]: from='client.50437 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[58955]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[58955]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-91018-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[58955]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-91018-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[51512]: from='client.50431 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-91018-41"}]': finished 2026-03-10T13:42:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[51512]: osdmap e249: 8 total, 8 up, 8 in 2026-03-10T13:42:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1964061180' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[51512]: from='client.50437 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-51","app": "rados","yes_i_really_mean_it": true}]: dispatch
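Teardown mirrors setup: "osd tier remove-overlay" stops routing I/O through the cache before the tier itself can be detached with "osd tier remove" (seen earlier in this log), and freshly created test pools are tagged with "osd pool application enable" (the yes_i_really_mean_it flag bypasses the interactive safeguard), which is why the POOL_APP_NOT_ENABLED count in the health updates keeps changing. The CLI form, again with this run's names:

    # tag the pool so POOL_APP_NOT_ENABLED stops counting it
    ceph osd pool application enable RoundTripPP3_vm05-91051-51 rados --yes-i-really-mean-it
    # detach the overlay, then the cache tier itself
    ceph osd tier remove-overlay test-rados-api-vm05-91276-6
    ceph osd tier remove test-rados-api-vm05-91276-6 test-rados-api-vm05-91276-35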
2026-03-10T13:42:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[51512]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[51512]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-91018-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[51512]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-91018-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:30.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:42:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:42:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:42:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:30 vm09 ceph-mon[53367]: from='client.50431 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm05-91018-41"}]': finished 2026-03-10T13:42:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:30 vm09 ceph-mon[53367]: osdmap e249: 8 total, 8 up, 8 in 2026-03-10T13:42:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1964061180' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:30 vm09 ceph-mon[53367]: from='client.50437 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:30 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:30 vm09 ceph-mon[53367]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:30 vm09 ceph-mon[53367]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-91018-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:30 vm09 ceph-mon[53367]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-91018-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: pgmap v325: 356 pgs: 32 unknown, 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 319 active+clean; 4.4 MiB data, 750 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:42:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: from='client.50437 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: from='client.49922 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-91018-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: osdmap e250: 8 total, 8 up, 8 in 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-91018-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-35"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-91018-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-35"}]': finished 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: osdmap e251: 8 total, 8 up, 8 in 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[58955]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: pgmap v325: 356 pgs: 32 unknown, 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 319 active+clean; 4.4 MiB data, 750 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: from='client.50437 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: from='client.49922 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-91018-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: osdmap e250: 8 total, 8 up, 8 in 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-91018-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-35"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-91018-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-35"}]': finished 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: osdmap e251: 8 total, 8 up, 8 in 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:31 vm05 ceph-mon[51512]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: pgmap v325: 356 pgs: 32 unknown, 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 319 active+clean; 4.4 MiB data, 750 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:42:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: from='client.50437 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm05-91051-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: from='client.49922 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm05-91018-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: osdmap e250: 8 total, 8 up, 8 in 2026-03-10T13:42:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-91018-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-35"}]: dispatch 2026-03-10T13:42:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-91018-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-35"}]': finished 2026-03-10T13:42:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: osdmap e251: 8 total, 8 up, 8 in 2026-03-10T13:42:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:31 vm09 ceph-mon[53367]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:32.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:32 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:32.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:32 vm05 ceph-mon[58955]: from='client.49922 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-91018-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-91018-42"}]': finished 2026-03-10T13:42:32.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:32 vm05 ceph-mon[58955]: from='client.49928 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:32.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:32 vm05 ceph-mon[58955]: osdmap e252: 8 total, 8 up, 8 in 2026-03-10T13:42:32.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm05-91051-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:32.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:32 vm05 ceph-mon[58955]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm05-91051-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:32.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:32.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:32 vm05 ceph-mon[51512]: from='client.49922 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-91018-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-91018-42"}]': finished 2026-03-10T13:42:32.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:32 vm05 ceph-mon[51512]: from='client.49928 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:32.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:32 vm05 ceph-mon[51512]: osdmap e252: 8 total, 8 up, 8 in 2026-03-10T13:42:32.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm05-91051-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:32.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:32 vm05 ceph-mon[51512]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm05-91051-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:32.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:32 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:32.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:32 vm09 ceph-mon[53367]: from='client.49922 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm05-91018-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm05-91018-42"}]': finished 2026-03-10T13:42:32.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:32 vm09 ceph-mon[53367]: from='client.49928 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:32.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:32 vm09 ceph-mon[53367]: osdmap e252: 8 total, 8 up, 8 in 2026-03-10T13:42:32.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm05-91051-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:32.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:32 vm09 ceph-mon[53367]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm05-91051-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:33 vm05 ceph-mon[58955]: pgmap v328: 324 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 319 active+clean; 4.4 MiB data, 750 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:42:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:33 vm05 ceph-mon[58955]: osdmap e253: 8 total, 8 up, 8 in 2026-03-10T13:42:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-35"}]: dispatch 2026-03-10T13:42:33.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:33 vm05 ceph-mon[51512]: pgmap v328: 324 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 319 active+clean; 4.4 MiB data, 750 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:42:33.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:33 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:33.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:33.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:33.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:33 vm05 ceph-mon[51512]: osdmap e253: 8 total, 8 up, 8 in 2026-03-10T13:42:33.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-35"}]: dispatch 2026-03-10T13:42:33.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:33 vm09 ceph-mon[53367]: pgmap v328: 324 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 319 active+clean; 4.4 MiB data, 750 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:42:33.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:33.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:33.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:33.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:33 vm09 ceph-mon[53367]: osdmap e253: 8 total, 8 up, 8 in 2026-03-10T13:42:33.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-35"}]: dispatch 2026-03-10T13:42:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:34 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:42:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:34 vm05 ceph-mon[58955]: from='client.49928 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm05-91051-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm05-91051-52"}]': finished 2026-03-10T13:42:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:34 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-35"}]': finished 2026-03-10T13:42:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:34 vm05 ceph-mon[58955]: osdmap e254: 8 total, 8 up, 8 in 2026-03-10T13:42:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:34 vm05 ceph-mon[58955]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:34 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:42:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:34 vm05 ceph-mon[51512]: from='client.49928 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm05-91051-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm05-91051-52"}]': finished 2026-03-10T13:42:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-35"}]': finished 2026-03-10T13:42:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:34 vm05 ceph-mon[51512]: osdmap e254: 8 total, 8 up, 8 in 2026-03-10T13:42:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:34 vm05 ceph-mon[51512]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:34.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:34.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:34 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:42:34.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:34 vm09 ceph-mon[53367]: from='client.49928 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm05-91051-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm05-91051-52"}]': finished 2026-03-10T13:42:34.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:34 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-35"}]': finished 2026-03-10T13:42:34.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:34 vm09 ceph-mon[53367]: osdmap e254: 8 total, 8 up, 8 in 2026-03-10T13:42:34.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:34.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:34 vm09 ceph-mon[53367]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:35 vm05 ceph-mon[58955]: pgmap v331: 332 pgs: 1 creating+activating, 1 active+clean+snaptrim, 330 active+clean; 4.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-10T13:42:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:35 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:35 vm05 ceph-mon[58955]: from='client.49922 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-91018-42"}]': finished 2026-03-10T13:42:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:35 vm05 ceph-mon[58955]: osdmap e255: 8 total, 8 up, 8 in 2026-03-10T13:42:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:35 vm05 ceph-mon[58955]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:35.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:35 vm05 ceph-mon[51512]: pgmap v331: 332 pgs: 1 creating+activating, 1 active+clean+snaptrim, 330 active+clean; 4.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-10T13:42:35.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:35 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:35 vm05 ceph-mon[51512]: from='client.49922 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-91018-42"}]': finished 2026-03-10T13:42:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:35 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:35 vm05 ceph-mon[51512]: osdmap e255: 8 total, 8 up, 8 in 2026-03-10T13:42:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:35 vm05 ceph-mon[51512]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:35 vm09 ceph-mon[53367]: pgmap v331: 332 pgs: 1 creating+activating, 1 active+clean+snaptrim, 330 active+clean; 4.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-10T13:42:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:35 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:35 vm09 ceph-mon[53367]: from='client.49922 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm05-91018-42"}]': finished 2026-03-10T13:42:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2998750852' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:35 vm09 ceph-mon[53367]: osdmap e255: 8 total, 8 up, 8 in 2026-03-10T13:42:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:35 vm09 ceph-mon[53367]: from='client.49922 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-91018-42"}]: dispatch 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio api_aio: [ OK ] LibRadosAioEC.SimpleWrite (7067 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.WaitForComplete 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.WaitForComplete (7039 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTrip 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTrip (7398 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTrip2 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTrip2 (7076 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTripAppend 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTripAppend (6983 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.IsComplete 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.IsComplete (7363 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.IsSafe 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.IsSafe 
(6833 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.ReturnValue 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.ReturnValue (7154 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.Flush 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.Flush (7107 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.FlushAsync 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.FlushAsync (8485 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTripWriteFull 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTripWriteFull (6257 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleStat 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.SimpleStat (7409 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleStatNS 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.SimpleStatNS (7260 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.StatRemove 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.StatRemove (7025 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.ExecuteClass 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.ExecuteClass (6372 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ RUN ] LibRadosAioEC.MultiWrite 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ OK ] LibRadosAioEC.MultiWrite (6967 ms) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [----------] 16 tests from LibRadosAioEC (113795 ms total) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [----------] Global test environment tear-down 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [==========] 42 tests from 2 test suites ran. (197834 ms total) 2026-03-10T13:42:36.146 INFO:tasks.workunit.client.0.vm05.stdout: api_aio: [ PASSED ] 42 tests. 2026-03-10T13:42:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:36 vm05 ceph-mon[58955]: from='client.49922 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-91018-42"}]': finished 2026-03-10T13:42:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:36 vm05 ceph-mon[58955]: osdmap e256: 8 total, 8 up, 8 in 2026-03-10T13:42:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:36 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:36 vm05 ceph-mon[58955]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:36 vm05 ceph-mon[51512]: from='client.49922 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-91018-42"}]': finished 2026-03-10T13:42:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:36 vm05 ceph-mon[51512]: osdmap e256: 8 total, 8 up, 8 in 2026-03-10T13:42:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:36 vm05 ceph-mon[51512]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:36 vm09 ceph-mon[53367]: from='client.49922 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm05-91018-42"}]': finished 2026-03-10T13:42:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:36 vm09 ceph-mon[53367]: osdmap e256: 8 total, 8 up, 8 in 2026-03-10T13:42:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:36 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:36.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:36 vm09 ceph-mon[53367]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:37.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[58955]: pgmap v334: 300 pgs: 8 unknown, 1 active+clean+snaptrim, 291 active+clean; 4.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:42:37.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:37.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:37.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[58955]: from='client.49928 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52"}]': finished 2026-03-10T13:42:37.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:37.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[58955]: osdmap e257: 8 total, 8 up, 8 in 2026-03-10T13:42:37.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[58955]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:37.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:42:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[51512]: pgmap v334: 300 pgs: 8 unknown, 1 active+clean+snaptrim, 291 active+clean; 4.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:42:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[51512]: from='client.49928 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52"}]': finished 2026-03-10T13:42:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[51512]: osdmap e257: 8 total, 8 up, 8 in 2026-03-10T13:42:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[51512]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:42:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:37 vm09 ceph-mon[53367]: pgmap v334: 300 pgs: 8 unknown, 1 active+clean+snaptrim, 291 active+clean; 4.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:42:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:37 vm09 ceph-mon[53367]: from='client.49928 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm05-91051-52"}]': finished 2026-03-10T13:42:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/579333806' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:37 vm09 ceph-mon[53367]: osdmap e257: 8 total, 8 up, 8 in 2026-03-10T13:42:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:37 vm09 ceph-mon[53367]: from='client.49928 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-91051-52"}]: dispatch 2026-03-10T13:42:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:37 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-37", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:42:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:42:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[58955]: from='client.49928 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-91051-52"}]': finished 2026-03-10T13:42:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:42:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[58955]: osdmap e258: 8 total, 8 up, 8 in 2026-03-10T13:42:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-37"}]: dispatch 2026-03-10T13:42:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[58955]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[58955]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-91051-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:38.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[58955]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-91051-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:42:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[51512]: from='client.49928 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-91051-52"}]': finished 2026-03-10T13:42:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:42:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[51512]: osdmap e258: 8 total, 8 up, 8 in 2026-03-10T13:42:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-37"}]: dispatch 2026-03-10T13:42:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[51512]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[51512]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-91051-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:38 vm05 ceph-mon[51512]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-91051-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:42:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:38 vm09 ceph-mon[53367]: from='client.49928 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm05-91051-52"}]': finished 2026-03-10T13:42:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:42:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:38 vm09 ceph-mon[53367]: osdmap e258: 8 total, 8 up, 8 in 2026-03-10T13:42:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-37"}]: dispatch 2026-03-10T13:42:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:38 vm09 ceph-mon[53367]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:38 vm09 ceph-mon[53367]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:38 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-91051-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:38 vm09 ceph-mon[53367]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-91051-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:38.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:42:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:42:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[58955]: pgmap v337: 324 pgs: 32 unknown, 1 active+clean+snaptrim, 291 active+clean; 4.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:42:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:42:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:42:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:42:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-37"}]': finished 2026-03-10T13:42:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[58955]: from='client.50455 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-91051-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[58955]: osdmap e259: 8 total, 8 up, 8 in 2026-03-10T13:42:39.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[51512]: pgmap v337: 324 pgs: 32 unknown, 1 active+clean+snaptrim, 291 active+clean; 4.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:42:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:42:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:42:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:42:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-37"}]': finished 2026-03-10T13:42:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[51512]: from='client.50455 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-91051-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:39 vm05 ceph-mon[51512]: osdmap e259: 8 total, 8 up, 8 in 2026-03-10T13:42:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:39 vm09 ceph-mon[53367]: pgmap v337: 324 pgs: 32 unknown, 1 active+clean+snaptrim, 291 active+clean; 4.4 MiB data, 734 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:42:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:42:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:39 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:39 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:42:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:39 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:42:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:39 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:42:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-37"}]': finished 2026-03-10T13:42:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:39 vm09 ceph-mon[53367]: from='client.50455 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm05-91051-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:39 vm09 ceph-mon[53367]: osdmap e259: 8 total, 8 up, 8 in 2026-03-10T13:42:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:42:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:42:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:42:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:40 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-37", "mode": "writeback"}]: dispatch 2026-03-10T13:42:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-91051-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:40 vm05 ceph-mon[58955]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-91051-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:40 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:42:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-37", "mode": "writeback"}]': finished 2026-03-10T13:42:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:40 vm05 ceph-mon[58955]: osdmap e260: 8 total, 8 up, 8 in 2026-03-10T13:42:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-37", "mode": "writeback"}]: dispatch 2026-03-10T13:42:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-91051-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:40 vm05 ceph-mon[51512]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-91051-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:40.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:40 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:42:40.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:40 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-37", "mode": "writeback"}]': finished 2026-03-10T13:42:40.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:40 vm05 ceph-mon[51512]: osdmap e260: 8 total, 8 up, 8 in 2026-03-10T13:42:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-37", "mode": "writeback"}]: dispatch 2026-03-10T13:42:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-91051-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:40 vm09 ceph-mon[53367]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-91051-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:40 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:42:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-37", "mode": "writeback"}]': finished 2026-03-10T13:42:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:40 vm09 ceph-mon[53367]: osdmap e260: 8 total, 8 up, 8 in 2026-03-10T13:42:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[58955]: pgmap v340: 324 pgs: 324 active+clean; 4.4 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:42:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[58955]: from='client.24674 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[58955]: from='client.50455 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-91051-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-91051-53"}]': finished 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[58955]: from='client.24674 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]': finished 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[58955]: osdmap e261: 8 total, 8 up, 8 in 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-37"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[58955]: from='client.24674 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[51512]: pgmap v340: 324 pgs: 324 active+clean; 4.4 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[51512]: from='client.24674 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[51512]: from='client.50455 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-91051-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-91051-53"}]': finished 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[51512]: from='client.24674 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]': finished 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[51512]: osdmap e261: 8 total, 8 up, 8 in 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-37"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T13:42:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:41 vm05 ceph-mon[51512]: from='client.24674 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T13:42:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:41 vm09 ceph-mon[53367]: pgmap v340: 324 pgs: 324 active+clean; 4.4 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:42:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:41 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:42:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T13:42:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:41 vm09 ceph-mon[53367]: from='client.24674 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T13:42:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:41 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:41 vm09 ceph-mon[53367]: from='client.50455 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm05-91051-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm05-91051-53"}]': finished 2026-03-10T13:42:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:42:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:41 vm09 ceph-mon[53367]: from='client.24674 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]': finished 2026-03-10T13:42:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:41 vm09 ceph-mon[53367]: osdmap e261: 8 total, 8 up, 8 in 2026-03-10T13:42:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-37"}]: dispatch 2026-03-10T13:42:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:41 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T13:42:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:41 vm09 ceph-mon[53367]: from='client.24674 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T13:42:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:42:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-37"}]': finished 2026-03-10T13:42:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[51512]: from='client.24674 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T13:42:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[51512]: osdmap e262: 8 total, 8 up, 8 in 2026-03-10T13:42:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T13:42:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[51512]: from='client.24674 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T13:42:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:42:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-37"}]': finished 2026-03-10T13:42:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[58955]: from='client.24674 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T13:42:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[58955]: osdmap e262: 8 total, 8 up, 8 in 2026-03-10T13:42:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T13:42:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:42 vm05 ceph-mon[58955]: from='client.24674 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T13:42:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:42 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:42:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-37"}]': finished 2026-03-10T13:42:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:42 vm09 ceph-mon[53367]: from='client.24674 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T13:42:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:42 vm09 ceph-mon[53367]: osdmap e262: 8 total, 8 up, 8 in 2026-03-10T13:42:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T13:42:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:42 vm09 ceph-mon[53367]: from='client.24674 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T13:42:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:43 vm05 ceph-mon[58955]: pgmap v343: 332 pgs: 8 unknown, 324 active+clean; 4.4 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:42:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:43 vm05 ceph-mon[58955]: from='client.24674 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]': finished 2026-03-10T13:42:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:43 vm05 ceph-mon[58955]: osdmap e263: 8 total, 8 up, 8 in 2026-03-10T13:42:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:43 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:43 vm05 ceph-mon[58955]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:43 vm05 ceph-mon[51512]: pgmap v343: 332 pgs: 8 unknown, 324 active+clean; 4.4 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:42:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:43 vm05 ceph-mon[51512]: from='client.24674 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]': finished 2026-03-10T13:42:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:43 vm05 ceph-mon[51512]: osdmap e263: 8 total, 8 up, 8 in 2026-03-10T13:42:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:43 vm05 ceph-mon[51512]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:43 vm09 ceph-mon[53367]: pgmap v343: 332 pgs: 8 unknown, 324 active+clean; 4.4 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:42:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:43 vm09 ceph-mon[53367]: from='client.24674 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pg_num","val":"11"}]': finished 2026-03-10T13:42:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:43 vm09 ceph-mon[53367]: osdmap e263: 8 total, 8 up, 8 in 2026-03-10T13:42:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:43 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:43 vm09 ceph-mon[53367]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:44 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-91156-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-10T13:42:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:44 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-91156-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-10T13:42:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:44 vm09 ceph-mon[53367]: from='client.50455 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-91051-53"}]': finished 2026-03-10T13:42:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:44 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-91156-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-10T13:42:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:44 vm09 ceph-mon[53367]: osdmap e264: 8 total, 8 up, 8 in 2026-03-10T13:42:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:44 vm09 ceph-mon[53367]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:44 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-91156-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-91156-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[58955]: from='client.50455 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-91051-53"}]': finished 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-91156-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[58955]: osdmap e264: 8 total, 8 up, 8 in 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[58955]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-91156-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-91156-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[51512]: from='client.50455 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm05-91051-53"}]': finished 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm05-91156-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[51512]: osdmap e264: 8 total, 8 up, 8 in 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3604185345' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[51512]: from='client.50455 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-91051-53"}]: dispatch 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:45.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:45 vm09 ceph-mon[53367]: pgmap v346: 292 pgs: 292 active+clean; 4.4 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 10 op/s 2026-03-10T13:42:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:45 vm09 ceph-mon[53367]: from='client.50455 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-91051-53"}]': finished 2026-03-10T13:42:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:45 vm09 ceph-mon[53367]: osdmap e265: 8 total, 8 up, 8 in 2026-03-10T13:42:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:45 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:42:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:45 vm09 ceph-mon[53367]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:45 vm09 ceph-mon[53367]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-91051-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:45 vm09 ceph-mon[53367]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-91051-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:45.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:45.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T13:42:45.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:45 vm09 ceph-mon[53367]: from='client.24674 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T13:42:45.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[51512]: pgmap v346: 292 pgs: 292 active+clean; 4.4 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 10 op/s 2026-03-10T13:42:45.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[51512]: from='client.50455 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-91051-53"}]': finished 2026-03-10T13:42:45.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:45.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[51512]: osdmap e265: 8 total, 8 up, 8 in 2026-03-10T13:42:45.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:45.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:42:45.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:45.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[51512]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:45.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[51512]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:45.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:45.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-91051-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:45.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[51512]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-91051-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:45.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:45.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T13:42:45.983 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[51512]: from='client.24674 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T13:42:45.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[58955]: pgmap v346: 292 pgs: 292 active+clean; 4.4 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 10 op/s 2026-03-10T13:42:45.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[58955]: from='client.50455 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm05-91051-53"}]': finished 2026-03-10T13:42:45.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:45.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[58955]: osdmap e265: 8 total, 8 up, 8 in 2026-03-10T13:42:45.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:45.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:42:45.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:45.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[58955]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:45.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[58955]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:45.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:45.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-91051-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:45.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[58955]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-91051-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:42:45.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:45.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T13:42:45.984 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:45 vm05 ceph-mon[58955]: from='client.24674 ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T13:42:47.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[51512]: pgmap v349: 324 pgs: 32 unknown, 292 active+clean; 4.4 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 10 op/s 2026-03-10T13:42:47.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:42:47.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[51512]: from='client.50461 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-91051-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:47.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[51512]: from='client.24674 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T13:42:47.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:47.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[51512]: osdmap e266: 8 total, 8 up, 8 in 2026-03-10T13:42:47.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-91051-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:47.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-39"}]: dispatch 2026-03-10T13:42:47.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[51512]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-91051-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:47.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[58955]: pgmap v349: 324 pgs: 32 unknown, 292 active+clean; 4.4 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 10 op/s 2026-03-10T13:42:47.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:42:47.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[58955]: from='client.50461 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-91051-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:47.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[58955]: from='client.24674 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T13:42:47.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:47.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[58955]: osdmap e266: 8 total, 8 up, 8 in 2026-03-10T13:42:47.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-91051-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:47.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-39"}]: dispatch 2026-03-10T13:42:47.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:47 vm05 ceph-mon[58955]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-91051-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:47 vm09 ceph-mon[53367]: pgmap v349: 324 pgs: 32 unknown, 292 active+clean; 4.4 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 10 op/s 2026-03-10T13:42:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:47 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:42:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:47 vm09 ceph-mon[53367]: from='client.50461 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm05-91051-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:42:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:47 vm09 ceph-mon[53367]: from='client.24674 ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm05-91156-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T13:42:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:47 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1512736556' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T13:42:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:47 vm09 ceph-mon[53367]: osdmap e266: 8 total, 8 up, 8 in 2026-03-10T13:42:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:47 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-91051-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:47 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-39"}]: dispatch 2026-03-10T13:42:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:47 vm09 ceph-mon[53367]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-91051-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-91051-54"}]: dispatch 2026-03-10T13:42:48.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-39"}]': finished 2026-03-10T13:42:48.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: osdmap e267: 8 total, 8 up, 8 in 2026-03-10T13:42:48.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-39", "mode": "writeback"}]: dispatch 2026-03-10T13:42:48.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch 2026-03-10T13:42:48.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch 2026-03-10T13:42:48.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-91156-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-91156-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: from='client.50461 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-91051-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-91051-54"}]': finished
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-39", "mode": "writeback"}]': finished
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: from='client.50467 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-91156-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: osdmap e268: 8 total, 8 up, 8 in
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-91156-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[58955]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-91156-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-39"}]': finished
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: osdmap e267: 8 total, 8 up, 8 in
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-39", "mode": "writeback"}]: dispatch
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-91156-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-91156-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: from='client.50461 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-91051-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-91051-54"}]': finished
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-39", "mode": "writeback"}]': finished
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: from='client.50467 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-91156-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: osdmap e268: 8 total, 8 up, 8 in
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-91156-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:48 vm05 ceph-mon[51512]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-91156-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-39"}]': finished
2026-03-10T13:42:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: osdmap e267: 8 total, 8 up, 8 in
2026-03-10T13:42:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-39", "mode": "writeback"}]: dispatch
2026-03-10T13:42:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-91156-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
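[editor's note] The mon records above are the standard librados EC-test preamble: drop any stale profile and crush rule, recreate the profile, then create an erasure-coded pool from it. A minimal sketch of the same sequence driven from Python (names are the ones in the log; assumes a ceph CLI on PATH with client.admin credentials):

    import subprocess

    def ceph(*args):
        # Run one ceph CLI command; mirrors the mon's dispatch/finished pairs above.
        subprocess.run(("ceph",) + args, check=True)

    profile = "testprofile-LibRadosListEC_vm05-91156-2"  # profile name from the log
    pool = "LibRadosListEC_vm05-91156-2"                 # pool name from the log

    # k=2/m=1 with an osd failure domain fits a small test cluster (3+ OSDs suffice).
    ceph("osd", "erasure-code-profile", "set", profile,
         "k=2", "m=1", "crush-failure-domain=osd")
    # Creating an EC pool from a profile also creates a crush rule named after the pool.
    ceph("osd", "pool", "create", pool, "8", "8", "erasure", profile)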
2026-03-10T13:42:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-91156-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:42:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T13:42:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: from='client.50461 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm05-91051-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm05-91051-54"}]': finished
2026-03-10T13:42:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-39", "mode": "writeback"}]': finished
2026-03-10T13:42:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: from='client.50467 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm05-91156-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T13:42:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: osdmap e268: 8 total, 8 up, 8 in
2026-03-10T13:42:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-91156-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:48 vm09 ceph-mon[53367]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-91156-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:48.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:42:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T13:42:49.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:49 vm05 ceph-mon[51512]: pgmap v352: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 740 MiB used, 159 GiB / 160 GiB avail
2026-03-10T13:42:49.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:42:49.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:42:49.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished
2026-03-10T13:42:49.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:49 vm05 ceph-mon[51512]: osdmap e269: 8 total, 8 up, 8 in
2026-03-10T13:42:49.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-39"}]: dispatch
2026-03-10T13:42:49.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:49 vm05 ceph-mon[58955]: pgmap v352: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 740 MiB used, 159 GiB / 160 GiB avail
2026-03-10T13:42:49.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:42:49.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:42:49.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished
2026-03-10T13:42:49.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:49 vm05 ceph-mon[58955]: osdmap e269: 8 total, 8 up, 8 in
2026-03-10T13:42:49.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-39"}]: dispatch
2026-03-10T13:42:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:49 vm09 ceph-mon[53367]: pgmap v352: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 740 MiB used, 159 GiB / 160 GiB avail
2026-03-10T13:42:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:42:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:42:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished
2026-03-10T13:42:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:49 vm09 ceph-mon[53367]: osdmap e269: 8 total, 8 up, 8 in
2026-03-10T13:42:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-39"}]: dispatch
2026-03-10T13:42:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:50 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T13:42:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:50 vm05 ceph-mon[51512]: from='client.50467 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-91156-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-91156-2"}]': finished
2026-03-10T13:42:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-39"}]': finished
2026-03-10T13:42:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:50 vm05 ceph-mon[51512]: osdmap e270: 8 total, 8 up, 8 in
2026-03-10T13:42:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-91051-54"}]: dispatch
2026-03-10T13:42:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:50 vm05 ceph-mon[51512]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-91051-54"}]: dispatch
2026-03-10T13:42:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:50 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T13:42:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:50 vm05 ceph-mon[58955]: from='client.50467 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-91156-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-91156-2"}]': finished
2026-03-10T13:42:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-39"}]': finished
2026-03-10T13:42:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:50 vm05 ceph-mon[58955]: osdmap e270: 8 total, 8 up, 8 in
2026-03-10T13:42:50.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-91051-54"}]: dispatch
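[editor's note] The remove-overlay / tier remove pair above is the cache-tier teardown for test-rados-api-vm05-91276-6, after which the CACHE_POOL_NO_HIT_SET check clears. A sketch of the same teardown under the same assumptions as the previous snippet:

    import subprocess

    def ceph(*args):
        subprocess.run(("ceph",) + args, check=True)

    base, cache = "test-rados-api-vm05-91276-6", "test-rados-api-vm05-91276-39"  # pools from the log

    # Teardown mirrors setup in reverse: drop the overlay first so clients stop
    # being redirected to the cache pool, then detach the tier itself.
    ceph("osd", "tier", "remove-overlay", base)
    ceph("osd", "tier", "remove", base, cache)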
2026-03-10T13:42:50.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:50 vm05 ceph-mon[58955]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-91051-54"}]: dispatch
2026-03-10T13:42:50.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:42:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:42:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T13:42:50.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:50 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T13:42:50.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:50 vm09 ceph-mon[53367]: from='client.50467 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm05-91156-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm05-91156-2"}]': finished
2026-03-10T13:42:50.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-39"}]': finished
2026-03-10T13:42:50.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:50 vm09 ceph-mon[53367]: osdmap e270: 8 total, 8 up, 8 in
2026-03-10T13:42:50.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-91051-54"}]: dispatch
2026-03-10T13:42:50.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:50 vm09 ceph-mon[53367]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-91051-54"}]: dispatch
2026-03-10T13:42:51.360 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [==========] Running 77 tests from 4 test suites.
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] Global test environment set-up.
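[editor's note] The api_tier_pp prefix matches the gtest binary ceph_test_rados_api_tier_pp, and the api_list output interleaved below comes from ceph_test_rados_api_list; the rados_api_tests workunit runs these binaries concurrently against the same cluster, which is why their output mixes. A hedged sketch of replaying one suite by hand (binary name inferred from the log prefix; --gtest_filter is the standard googletest selector; assumes a running cluster and admin config):

    import subprocess
    # Re-run only the replicated-pool tier suite, as in the LibRadosTierPP section below.
    subprocess.run(["ceph_test_rados_api_tier_pp",
                    "--gtest_filter=LibRadosTierPP.*"], check=True)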
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 3 tests from LibRadosTierPP
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: seed 91276
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.Dirty
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTierPP.Dirty (418 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.FlushWriteRaces
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTierPP.FlushWriteRaces (11285 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.HitSetNone
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTierPP.HitSetNone (9 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 3 tests from LibRadosTierPP (11712 ms total)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp:
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 48 tests from LibRadosTwoPoolsPP
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Overlay
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Overlay (7423 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Promote
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Promote (8002 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnap
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnap (10165 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnapScrub
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: my_snaps [3]
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: my_snaps [4,3]
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: my_snaps [5,4,3]
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: my_snaps [6,5,4,3]
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: promoting some heads
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: promoting from clones for snap 6
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: promoting from clones for snap 5
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: promoting from clones for snap 4
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: promoting from clones for snap 3
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: waiting for scrubs...
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: done waiting
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnapScrub (46978 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnapTrimRace
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnapTrimRace (10047 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Whiteout
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Whiteout (8391 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.WhiteoutDeleteCreate
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.WhiteoutDeleteCreate (8083 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Evict
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Evict (8042 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnap
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnap (10144 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnap2
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnap2 (9166 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ListSnap
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ListSnap (10926 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnapRollbackReadRace
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnapRollbackReadRace (14157 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TryFlush
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TryFlush (8421 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Flush
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Flush (8033 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.FlushSnap
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.FlushSnap (12377 ms)
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.FlushTryFlushRaces
2026-03-10T13:42:51.361 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.FlushTryFlushRaces (7974 ms)
2026-03-10T13:42:51.362 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TryFlushReadRace
2026-03-10T13:42:51.362 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TryFlushReadRace (8201 ms)
2026-03-10T13:42:51.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:51 vm09 ceph-mon[53367]: pgmap v355: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 137 op/s
2026-03-10T13:42:51.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:51 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:42:51.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:51 vm05 ceph-mon[51512]: pgmap v355: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 137 op/s
2026-03-10T13:42:51.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:51 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:42:51.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:51 vm05 ceph-mon[58955]: pgmap v355: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 137 op/s
2026-03-10T13:42:51.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:51 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSexpected=11
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:863748b0:::15:head
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:863748b0:::15:head expected=13:863748b0:::15:head
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:863748b0:::15:head -> 15
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=15 expected=15
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:52ea6a34:::10:head
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:52ea6a34:::10:head expected=13:52ea6a34:::10:head
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:52ea6a34:::10:head -> 10
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=10 expected=10
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:566253c9:::13:head
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:566253c9:::13:head expected=13:566253c9:::13:head
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:566253c9:::13:head -> 13
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=13 expected=13
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:6cac518f:::0:head
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:6cac518f:::0:head expected=13:6cac518f:::0:head
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:6cac518f:::0:head -> 0
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=0 expected=0
2026-03-10T13:42:52.792 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:62a1935d:::14:head
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:62a1935d:::14:head expected=13:62a1935d:::14:head
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:62a1935d:::14:head -> 14
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=14 expected=14
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:5c6b0b28:::7:head
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:5c6b0b28:::7:head expected=13:5c6b0b28:::7:head
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:5c6b0b28:::7:head -> 7
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=7 expected=7
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:f905c69b:::2:head
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:f905c69b:::2:head expected=13:f905c69b:::2:head
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:f905c69b:::2:head -> 2
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=2 expected=2
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : seek to 13:02547ec2:::1:head
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : cursor()=13:02547ec2:::1:head expected=13:02547ec2:::1:head
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: > 13:02547ec2:::1:head -> 1
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: : entry=1 expected=1
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosList.ListObjectsCursor (190 ms)
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosList.EnumerateObjects
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosList.EnumerateObjects (201408 ms)
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosList.EnumerateObjectsSplit
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: split 0/5 -> MIN 13:33333333::::head
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: split 1/5 -> 13:33333333::::head 13:66666666::::head
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: split 2/5 -> 13:66666666::::head 13:99999999::::head
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: split 3/5 -> 13:99999999::::head 13:cccccccc::::head
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: split 4/5 -> 13:cccccccc::::head MAX
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosList.EnumerateObjectsSplit (4013 ms)
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] 7 tests from LibRadosList (206153 ms total)
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list:
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] 3 tests from LibRadosListEC
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosListEC.ListObjects
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosListEC.ListObjects (2415 ms)
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosListEC.ListObjectsNS
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset foo1,foo2,foo3
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo1
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo2
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo3
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset foo1,foo4,foo5
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo4
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo5
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo1
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset foo6,foo7
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo7
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: foo6
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: myset :foo1,:foo2,:foo3,ns1:foo1,ns1:foo4,ns1:foo5,ns2:foo6,ns2:foo7
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns1:foo4
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns1:foo5
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns2:foo7
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns2:foo6
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: ns1:foo1
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: :foo1
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: :foo2
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: :foo3
2026-03-10T13:42:52.793 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosListEC.ListObjectsNS (207 ms)
2026-03-10T13:42:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:52 vm09 ceph-mon[53367]: from='client.50461 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-91051-54"}]': finished
2026-03-10T13:42:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:52 vm09 ceph-mon[53367]: osdmap e271: 8 total, 8 up, 8 in
2026-03-10T13:42:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-91051-54"}]: dispatch
2026-03-10T13:42:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:52 vm09 ceph-mon[53367]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-91051-54"}]: dispatch
2026-03-10T13:42:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:52 vm05 ceph-mon[58955]: from='client.50461 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-91051-54"}]': finished
2026-03-10T13:42:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:52 vm05 ceph-mon[58955]: osdmap e271: 8 total, 8 up, 8 in
2026-03-10T13:42:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-91051-54"}]: dispatch
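[editor's note] The api_list sections above exercise pool enumeration: ListObjectsCursor seeks back to saved cursors and re-reads entries, EnumerateObjectsSplit cuts the listing hash space into equal slices (the split N/5 -> ... lines) so enumeration can be sharded, and ListObjectsNS filters by namespace (the myset ... lines are the expected per-namespace object sets). A rough python-rados equivalent of the namespace listing; the pool name here is hypothetical and only the listing calls shown are assumed:

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumes client.admin config
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("mypool")  # hypothetical pool name
        try:
            ioctx.set_namespace("ns1")        # restrict listing to one namespace
            for obj in ioctx.list_objects():
                print(obj.nspace, obj.key)    # e.g. "ns1 foo1", as in the output above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()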
2026-03-10T13:42:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:52 vm05 ceph-mon[58955]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-91051-54"}]: dispatch
2026-03-10T13:42:53.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:52 vm05 ceph-mon[51512]: from='client.50461 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm05-91051-54"}]': finished
2026-03-10T13:42:53.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:52 vm05 ceph-mon[51512]: osdmap e271: 8 total, 8 up, 8 in
2026-03-10T13:42:53.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:52 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1909794501' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-91051-54"}]: dispatch
2026-03-10T13:42:53.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:52 vm05 ceph-mon[51512]: from='client.50461 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-91051-54"}]: dispatch
2026-03-10T13:42:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:53 vm09 ceph-mon[53367]: pgmap v358: 268 pgs: 8 unknown, 260 active+clean; 8.3 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s
2026-03-10T13:42:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:53 vm09 ceph-mon[53367]: from='client.50461 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-91051-54"}]': finished
2026-03-10T13:42:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:53 vm09 ceph-mon[53367]: osdmap e272: 8 total, 8 up, 8 in
2026-03-10T13:42:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-41","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:42:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:42:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:53 vm09 ceph-mon[53367]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:53 vm09 ceph-mon[53367]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:53 vm09 ceph-mon[53367]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm05-91051-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:42:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm05-91051-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:42:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-41","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:42:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:53 vm09 ceph-mon[53367]: from='client.50473 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm05-91051-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T13:42:53.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:53 vm09 ceph-mon[53367]: osdmap e273: 8 total, 8 up, 8 in
2026-03-10T13:42:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[58955]: pgmap v358: 268 pgs: 8 unknown, 260 active+clean; 8.3 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s
2026-03-10T13:42:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[58955]: from='client.50461 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-91051-54"}]': finished
2026-03-10T13:42:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[58955]: osdmap e272: 8 total, 8 up, 8 in
2026-03-10T13:42:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-41","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:42:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:42:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[58955]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[58955]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[58955]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm05-91051-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm05-91051-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-41","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[58955]: from='client.50473 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm05-91051-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[58955]: osdmap e273: 8 total, 8 up, 8 in
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[51512]: pgmap v358: 268 pgs: 8 unknown, 260 active+clean; 8.3 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[51512]: from='client.50461 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm05-91051-54"}]': finished
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[51512]: osdmap e272: 8 total, 8 up, 8 in
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-41","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[51512]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[51512]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[51512]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm05-91051-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm05-91051-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-41","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[51512]: from='client.50473 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm05-91051-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T13:42:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:53 vm05 ceph-mon[51512]: osdmap e273: 8 total, 8 up, 8 in
2026-03-10T13:42:55.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:54 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-41", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:42:55.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:54 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm05-91051-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:55.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:54 vm05 ceph-mon[58955]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm05-91051-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:55.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:54 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:55.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:54 vm05 ceph-mon[58955]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:55.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:54 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-41", "force_nonempty": "--force-nonempty" }]: dispatch
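[editor's note] The POOL_APP_NOT_ENABLED churn in this window is the health check flagging freshly created test pools until each test tags them, as the 'osd pool application enable' commands above do. The CLI equivalent (pool name and app tag taken from the log; the tests pass yes_i_really_mean_it, which the CLI needs for tags outside the built-in cephfs/rbd/rgw set):

    import subprocess
    # Tag the pool with an application so POOL_APP_NOT_ENABLED stops counting it.
    subprocess.run(["ceph", "osd", "pool", "application", "enable",
                    "test-rados-api-vm05-91276-41", "rados",
                    "--yes-i-really-mean-it"], check=True)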
2026-03-10T13:42:55.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:54 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm05-91051-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:55.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:54 vm05 ceph-mon[51512]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm05-91051-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:55.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:54 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:55.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:54 vm05 ceph-mon[51512]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:55.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:54 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-41", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:42:55.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:54 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm05-91051-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:55.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:54 vm09 ceph-mon[53367]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm05-91051-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm05-91051-55"}]: dispatch
2026-03-10T13:42:55.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:54 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:55.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:54 vm09 ceph-mon[53367]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:56.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[51512]: pgmap v361: 292 pgs: 2 active, 6 creating+activating, 17 creating+peering, 267 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail
2026-03-10T13:42:56.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-41", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T13:42:56.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[51512]: from='client.50467 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-91156-2"}]': finished
2026-03-10T13:42:56.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[51512]: osdmap e274: 8 total, 8 up, 8 in
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[51512]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[51512]: from='client.50473 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm05-91051-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm05-91051-55"}]': finished
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_count","val": "2"}]': finished
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[51512]: from='client.50467 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-91156-2"}]': finished
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[51512]: osdmap e275: 8 total, 8 up, 8 in
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[58955]: pgmap v361: 292 pgs: 2 active, 6 creating+activating, 17 creating+peering, 267 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-41", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[58955]: from='client.50467 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-91156-2"}]': finished
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[58955]: osdmap e274: 8 total, 8 up, 8 in
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[58955]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[58955]: from='client.50473 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm05-91051-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm05-91051-55"}]': finished
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_count","val": "2"}]': finished
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[58955]: from='client.50467 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-91156-2"}]': finished
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[58955]: osdmap e275: 8 total, 8 up, 8 in
2026-03-10T13:42:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T13:42:56.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:55 vm09 ceph-mon[53367]: pgmap v361: 292 pgs: 2 active, 6 creating+activating, 17 creating+peering, 267 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail
2026-03-10T13:42:56.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-41", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T13:42:56.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:55 vm09 ceph-mon[53367]: from='client.50467 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm05-91156-2"}]': finished
2026-03-10T13:42:56.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2131590791' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:56.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:55 vm09 ceph-mon[53367]: osdmap e274: 8 total, 8 up, 8 in
2026-03-10T13:42:56.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T13:42:56.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:55 vm09 ceph-mon[53367]: from='client.50467 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-91156-2"}]: dispatch
2026-03-10T13:42:56.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:55 vm09 ceph-mon[53367]: from='client.50473 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm05-91051-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm05-91051-55"}]': finished
2026-03-10T13:42:56.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_count","val": "2"}]': finished
2026-03-10T13:42:56.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:55 vm09 ceph-mon[53367]: from='client.50467 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm05-91156-2"}]': finished
2026-03-10T13:42:56.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:55 vm09 ceph-mon[53367]: osdmap e275: 8 total, 8 up, 8 in
2026-03-10T13:42:56.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T13:42:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:56 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:42:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:56 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_period","val": "600"}]': finished
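[editor's note] The hit_set_* pool sets above configure HitSet tracking on the cache pool; it is the absence of these settings on a cache-mode pool that raised CACHE_POOL_NO_HIT_SET earlier in the log. The same knobs via the CLI, with the values dispatched above (explicit_object records exact object names, versus the bloom-filter hit_set_type):

    import subprocess

    def pool_set(pool, var, val):
        subprocess.run(["ceph", "osd", "pool", "set", pool, var, val], check=True)

    cache = "test-rados-api-vm05-91276-41"      # cache pool from the log
    pool_set(cache, "hit_set_count", "2")       # keep two HitSets...
    pool_set(cache, "hit_set_period", "600")    # ...each covering a 600s window
    pool_set(cache, "hit_set_type", "explicit_object")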
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:42:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:56 vm05 ceph-mon[58955]: osdmap e276: 8 total, 8 up, 8 in 2026-03-10T13:42:57.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:56 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:57.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:56 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:42:57.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:56 vm05 ceph-mon[51512]: osdmap e276: 8 total, 8 up, 8 in 2026-03-10T13:42:57.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:56 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:42:57.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:56 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:42:57.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:56 vm09 ceph-mon[53367]: osdmap e276: 8 total, 8 up, 8 in 2026-03-10T13:42:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[58955]: pgmap v364: 300 pgs: 8 unknown, 2 active, 6 creating+activating, 17 creating+peering, 267 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:42:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T13:42:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3808045656' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91156-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T13:42:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3808045656' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91156-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[58955]: osdmap e277: 8 total, 8 up, 8 in 2026-03-10T13:42:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-91051-55"}]: dispatch 2026-03-10T13:42:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[58955]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-91051-55"}]: dispatch 2026-03-10T13:42:58.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[51512]: pgmap v364: 300 pgs: 8 unknown, 2 active, 6 creating+activating, 17 creating+peering, 267 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:42:58.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T13:42:58.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3808045656' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91156-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:58.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T13:42:58.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3808045656' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91156-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:58.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[51512]: osdmap e277: 8 total, 8 up, 8 in 2026-03-10T13:42:58.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-91051-55"}]: dispatch 2026-03-10T13:42:58.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:57 vm05 ceph-mon[51512]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-91051-55"}]: dispatch 2026-03-10T13:42:58.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:57 vm09 ceph-mon[53367]: pgmap v364: 300 pgs: 8 unknown, 2 active, 6 creating+activating, 17 creating+peering, 267 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:42:58.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:57 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T13:42:58.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:57 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3808045656' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91156-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:42:58.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:57 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-41","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T13:42:58.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:57 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3808045656' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91156-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:42:58.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:57 vm09 ceph-mon[53367]: osdmap e277: 8 total, 8 up, 8 in 2026-03-10T13:42:58.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:57 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-91051-55"}]: dispatch 2026-03-10T13:42:58.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:57 vm09 ceph-mon[53367]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-91051-55"}]: dispatch 2026-03-10T13:42:58.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:42:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosListEC.ListObjectsStart 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 1 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 10 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 13 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 7 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 14 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 0 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 15 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 11 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 5 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 8 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 6 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 3 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 4 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 12 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 9 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 2 0 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: have 1 expect one of 0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosListEC.ListObjectsStart (235 ms) 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] 3 tests from LibRadosListEC (2857 ms total) 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 
2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] 1 test from LibRadosListNP 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ RUN ] LibRadosListNP.ListObjectsError 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ OK ] LibRadosListNP.ListObjectsError (3032 ms) 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] 1 test from LibRadosListNP (3032 ms total) 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [----------] Global test environment tear-down 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [==========] 11 tests from 3 test suites ran. (220320 ms total) 2026-03-10T13:42:58.750 INFO:tasks.workunit.client.0.vm05.stdout: api_list: [ PASSED ] 11 tests. 2026-03-10T13:42:59.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:58 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3808045656' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm05-91156-3","pool2":"test-rados-api-vm05-91156-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-10T13:42:59.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:58 vm05 ceph-mon[58955]: from='client.50473 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-91051-55"}]': finished 2026-03-10T13:42:59.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:58 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3808045656' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm05-91156-3","pool2":"test-rados-api-vm05-91156-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-10T13:42:59.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:58 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-91051-55"}]: dispatch 2026-03-10T13:42:59.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:58 vm05 ceph-mon[58955]: osdmap e278: 8 total, 8 up, 8 in 2026-03-10T13:42:59.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:58 vm05 ceph-mon[58955]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-91051-55"}]: dispatch 2026-03-10T13:42:59.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:58 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3808045656' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm05-91156-3","pool2":"test-rados-api-vm05-91156-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-10T13:42:59.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:58 vm05 ceph-mon[51512]: from='client.50473 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-91051-55"}]': finished 2026-03-10T13:42:59.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:58 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3808045656' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm05-91156-3","pool2":"test-rados-api-vm05-91156-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-10T13:42:59.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:58 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-91051-55"}]: dispatch 2026-03-10T13:42:59.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:58 vm05 ceph-mon[51512]: osdmap e278: 8 total, 8 up, 8 in 2026-03-10T13:42:59.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:58 vm05 ceph-mon[51512]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-91051-55"}]: dispatch 2026-03-10T13:42:59.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:58 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3808045656' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm05-91156-3","pool2":"test-rados-api-vm05-91156-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch 2026-03-10T13:42:59.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:58 vm09 ceph-mon[53367]: from='client.50473 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm05-91051-55"}]': finished 2026-03-10T13:42:59.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:58 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3808045656' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm05-91156-3","pool2":"test-rados-api-vm05-91156-3","yes_i_really_really_mean_it_not_faking": true}]': finished 2026-03-10T13:42:59.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:58 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3637690073' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-91051-55"}]: dispatch 2026-03-10T13:42:59.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:58 vm09 ceph-mon[53367]: osdmap e278: 8 total, 8 up, 8 in 2026-03-10T13:42:59.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:58 vm09 ceph-mon[53367]: from='client.50473 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-91051-55"}]: dispatch 2026-03-10T13:43:00.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[58955]: pgmap v367: 324 pgs: 32 unknown, 2 active, 6 creating+activating, 17 creating+peering, 267 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:43:00.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:43:00.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:43:00.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-41"}]: dispatch 2026-03-10T13:43:00.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[58955]: from='client.50473 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-91051-55"}]': finished 2026-03-10T13:43:00.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-41"}]': finished 2026-03-10T13:43:00.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[58955]: osdmap e279: 8 total, 8 up, 8 in 2026-03-10T13:43:00.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:00.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[58955]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:00.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:00.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[58955]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:00.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[51512]: pgmap v367: 324 pgs: 32 unknown, 2 active, 6 creating+activating, 17 creating+peering, 267 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:43:00.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:43:00.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:43:00.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-41"}]: dispatch 2026-03-10T13:43:00.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[51512]: from='client.50473 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-91051-55"}]': finished 2026-03-10T13:43:00.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-41"}]': finished 2026-03-10T13:43:00.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[51512]: osdmap e279: 8 total, 8 up, 8 in 2026-03-10T13:43:00.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:00.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[51512]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:00.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:00.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:42:59 vm05 ceph-mon[51512]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:00.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:42:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:42:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:43:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:59 vm09 ceph-mon[53367]: pgmap v367: 324 pgs: 32 unknown, 2 active, 6 creating+activating, 17 creating+peering, 267 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:43:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:43:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:43:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-41"}]: dispatch 2026-03-10T13:43:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:59 vm09 ceph-mon[53367]: from='client.50473 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm05-91051-55"}]': finished 2026-03-10T13:43:00.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-41"}]': finished 2026-03-10T13:43:00.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:59 vm09 ceph-mon[53367]: osdmap e279: 8 total, 8 up, 8 in 2026-03-10T13:43:00.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:00.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:59 vm09 ceph-mon[53367]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:00.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:59 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:00.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:42:59 vm09 ceph-mon[53367]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-91051-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:00 vm05 ceph-mon[58955]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-91051-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:01.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-91051-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:01.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:00 vm05 ceph-mon[51512]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-91051-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-91051-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:00 vm09 ceph-mon[53367]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-91051-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:01 vm05 ceph-mon[58955]: pgmap v370: 292 pgs: 292 active+clean; 8.3 MiB data, 737 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:01 vm05 ceph-mon[58955]: from='client.50482 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-91051-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:43:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:01 vm05 ceph-mon[58955]: osdmap e280: 8 total, 8 up, 8 in 2026-03-10T13:43:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:01 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-91051-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:01 vm05 ceph-mon[58955]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-91051-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:01 vm05 ceph-mon[58955]: osdmap e281: 8 total, 8 up, 8 in 2026-03-10T13:43:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:43:02.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:01 vm05 ceph-mon[51512]: pgmap v370: 292 pgs: 292 active+clean; 8.3 MiB data, 737 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:02.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:01 vm05 ceph-mon[51512]: from='client.50482 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-91051-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:43:02.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:01 vm05 ceph-mon[51512]: osdmap e280: 8 total, 8 up, 8 in 2026-03-10T13:43:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-91051-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:01 vm05 ceph-mon[51512]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-91051-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:01 vm05 ceph-mon[51512]: osdmap e281: 8 total, 8 up, 8 in 2026-03-10T13:43:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:01 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:43:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:01 vm09 ceph-mon[53367]: pgmap v370: 292 pgs: 292 active+clean; 8.3 MiB data, 737 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:01 vm09 ceph-mon[53367]: from='client.50482 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm05-91051-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:43:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:01 vm09 ceph-mon[53367]: osdmap e280: 8 total, 8 up, 8 in 2026-03-10T13:43:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-91051-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:01 vm09 ceph-mon[53367]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-91051-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:01 vm09 ceph-mon[53367]: osdmap e281: 8 total, 8 up, 8 in 2026-03-10T13:43:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:43:04.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:03 vm05 ceph-mon[58955]: pgmap v373: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 737 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:04.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:03 vm05 ceph-mon[58955]: from='client.50482 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-91051-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-91051-56"}]': finished 2026-03-10T13:43:04.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:03 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:43:04.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:03 vm05 ceph-mon[58955]: osdmap e282: 8 total, 8 up, 8 in 2026-03-10T13:43:04.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:03 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm05-91276-6","var": "pg_num","format": "json"}]: dispatch 2026-03-10T13:43:04.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:03 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:43:04.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:03 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:04.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:03 vm05 ceph-mon[51512]: pgmap v373: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 737 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:04.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:03 vm05 ceph-mon[51512]: from='client.50482 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-91051-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-91051-56"}]': finished 2026-03-10T13:43:04.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:43:04.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:03 vm05 ceph-mon[51512]: osdmap e282: 8 total, 8 up, 8 in 2026-03-10T13:43:04.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm05-91276-6","var": "pg_num","format": "json"}]: dispatch 2026-03-10T13:43:04.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:43:04.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:03 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:03 vm09 ceph-mon[53367]: pgmap v373: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 737 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:03 vm09 ceph-mon[53367]: from='client.50482 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm05-91051-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm05-91051-56"}]': finished 2026-03-10T13:43:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:03 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:43:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:03 vm09 ceph-mon[53367]: osdmap e282: 8 total, 8 up, 8 in 2026-03-10T13:43:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:03 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm05-91276-6","var": "pg_num","format": "json"}]: dispatch 2026-03-10T13:43:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:03 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:43:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:03 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:05.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:43:05.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:04 vm05 ceph-mon[58955]: osdmap e283: 8 total, 8 up, 8 in 2026-03-10T13:43:05.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T13:43:05.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:43:05.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:04 vm05 ceph-mon[51512]: osdmap e283: 8 total, 8 up, 8 in 2026-03-10T13:43:05.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T13:43:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:43:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:04 vm09 ceph-mon[53367]: osdmap e283: 8 total, 8 up, 8 in 2026-03-10T13:43:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T13:43:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:05 vm09 ceph-mon[53367]: pgmap v376: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 742 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:05 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_count","val": "8"}]': finished 2026-03-10T13:43:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:05 vm09 ceph-mon[53367]: osdmap e284: 8 total, 8 up, 8 in 2026-03-10T13:43:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:05 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:05 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:43:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:05 vm09 ceph-mon[53367]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:05 vm05 ceph-mon[58955]: pgmap v376: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 742 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_count","val": "8"}]': finished 2026-03-10T13:43:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:05 vm05 ceph-mon[58955]: osdmap e284: 8 total, 8 up, 8 in 2026-03-10T13:43:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:43:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:05 vm05 ceph-mon[58955]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:05 vm05 ceph-mon[51512]: pgmap v376: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 742 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_count","val": "8"}]': finished 2026-03-10T13:43:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:05 vm05 ceph-mon[51512]: osdmap e284: 8 total, 8 up, 8 in 2026-03-10T13:43:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:05 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:43:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:05 vm05 ceph-mon[51512]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:07.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:06 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:43:07.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:06 vm09 ceph-mon[53367]: from='client.50482 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-91051-56"}]': finished 2026-03-10T13:43:07.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:06 vm09 ceph-mon[53367]: osdmap e285: 8 total, 8 up, 8 in 2026-03-10T13:43:07.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:06 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:07.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:06 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T13:43:07.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:06 vm09 ceph-mon[53367]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout:tRead 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: hmm, no HitSet yet 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: ok, hit_set contains 265:602f83fe:::foo:head 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetRead (9461 ms) 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetWrite 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg_num = 32 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 0 ls 1773150187,0 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 1 ls 1773150187,0 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 2 ls 1773150187,0 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 3 ls 1773150187,0 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 4 ls 1773150187,0 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 5 ls 1773150187,0 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 6 ls 1773150187,0 2026-03-10T13:43:07.178 
INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 7 ls 1773150187,0 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 8 ls 1773150187,0 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 9 ls 1773150187,0 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 10 ls 1773150187,0 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 11 ls 1773150187,0 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 12 ls 1773150187,0 2026-03-10T13:43:07.178 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 13 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 14 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 15 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 16 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 17 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 18 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 19 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 20 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 21 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 22 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 23 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 24 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 25 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 26 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 27 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 28 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 29 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 30 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg 31 ls 1773150187,0 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: pg_num = 32 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6cac518f:::0:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:02547ec2:::1:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f905c69b:::2:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:cfc208b3:::3:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d83876eb:::4:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b29083e3:::5:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c4fdafeb:::6:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:5c6b0b28:::7:head 2026-03-10T13:43:07.179 
INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:bd63b0f1:::8:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:e960b815:::9:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:52ea6a34:::10:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:89d3ae78:::11:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:de5d7c5f:::12:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:566253c9:::13:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:62a1935d:::14:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:863748b0:::15:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:3958e169:::16:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:4d4dabf9:::17:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:8391935d:::18:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:28883081:::19:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:69259c59:::20:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:4bdb80b7:::21:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a11c5d71:::22:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:271af37b:::23:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:95b121be:::24:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:58d1031b:::25:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:0a050783:::26:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c709704c:::27:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:cbe56eaf:::28:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:86b4b162:::29:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:70d89383:::30:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:dd450c7c:::31:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6d5729b1:::32:head 2026-03-10T13:43:07.179 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c388f3fb:::33:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:56cfea31:::34:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:9dbc1bf7:::35:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:40b74ccd:::36:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:4d5aaf42:::37:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:920f362c:::38:head 
2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6cc53222:::39:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:9cad833f:::40:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:1ea84d41:::41:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c4480ef6:::42:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a694361e:::43:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d1bd33e9:::44:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:ddc2cd5d:::45:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:2b782207:::46:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:7b187fca:::47:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:90ecdf6f:::48:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a5ed95fe:::49:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:ea0eaa55:::50:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f33ef17b:::51:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a0d1b2f6:::52:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:60c5229e:::53:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:edcbc575:::54:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:102cf253:::55:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:efb7fb0b:::56:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:50d0a326:::57:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d4dc5daf:::58:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:3a130462:::59:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:ec87ed71:::60:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d5bc9454:::61:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:3ddfe313:::62:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:7c2816b9:::63:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:47e00e4d:::64:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c6410c18:::65:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b48ed237:::66:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:cd63ad31:::67:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b179e92b:::68:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 
268:0d9f741a:::69:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6d3352ae:::70:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c6d5c19e:::71:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:bc4729c3:::72:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:77e930b9:::73:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:0abeecfd:::74:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b7c37e15:::75:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b6378398:::76:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:02bd68de:::77:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:cc795d2d:::78:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:630d4fea:::79:head 2026-03-10T13:43:07.180 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:e0d29ef5:::80:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:fd6f13d2:::81:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:606461d5:::82:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:eadbdc43:::83:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:8761d0bb:::84:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:9ef0186f:::85:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:e0d41294:::86:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:961de695:::87:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:1423148f:::88:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:633a8fa2:::89:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a8653809:::90:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:3dac8b33:::91:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:35aad435:::92:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f6dcc343:::93:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:dbbdad87:::94:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:1cb48ce0:::95:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:03cd461c:::96:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:17a4ea99:::97:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:9993c9a7:::98:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6394211c:::99:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: 
checking for 268:94c7ae57:::100:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6fdee5bb:::101:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:9a477fd1:::102:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:eb850916:::103:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:affc56b9:::104:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b42dc814:::105:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f319f8f0:::106:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:9a40b9de:::107:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:8b524f28:::108:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:e3de589f:::109:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:90f90a5b:::110:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a7b4f1d7:::111:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:af51766e:::112:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b6f90bd1:::113:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:e0261208:::114:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c9569ef7:::115:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:61bebe50:::116:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:fe93412b:::117:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d3d38bee:::118:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:3100ba0c:::119:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d0560ada:::120:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f0ea8b35:::121:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:766f231a:::122:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a07a2582:::123:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:bd7c6b3a:::124:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:fb2ddaff:::125:head 2026-03-10T13:43:07.181 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:4408e1fe:::126:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:ee1df7a7:::127:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c3002909:::128:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:4f48ffa9:::129:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:edf38733:::130:head 2026-03-10T13:43:07.182 
INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c08425c0:::131:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:5f902d98:::132:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:41ea2c93:::133:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:813cee13:::134:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:0131818d:::135:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:26ba5a85:::136:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:381b8a5a:::137:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:28797e47:::138:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:bfca7f22:::139:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:36807075:::140:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:80b03975:::141:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:5c15709b:::142:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f39ea15e:::143:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:ea992956:::144:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:48887b1c:::145:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:9f24a9dd:::146:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:987f100b:::147:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d2dd3581:::148:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:7fed1808:::149:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c80b70e9:::150:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:85ed90f9:::151:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:36428b24:::152:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d044c34a:::153:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:7c18bf58:::154:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d1c21232:::155:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a7a3c575:::156:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:87da0633:::157:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d5ac3822:::158:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:3f20522d:::159:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6ca26563:::160:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 
268:532ce135:::161:head 2026-03-10T13:43:07.182 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:c78863e6:::162:head 2026-03-10T13:43:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:06 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:43:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:06 vm05 ceph-mon[58955]: from='client.50482 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-91051-56"}]': finished 2026-03-10T13:43:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:06 vm05 ceph-mon[58955]: osdmap e285: 8 total, 8 up, 8 in 2026-03-10T13:43:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:06 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:06 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T13:43:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:06 vm05 ceph-mon[58955]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:07.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:06 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:43:07.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:06 vm05 ceph-mon[51512]: from='client.50482 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm05-91051-56"}]': finished 2026-03-10T13:43:07.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:06 vm05 ceph-mon[51512]: osdmap e285: 8 total, 8 up, 8 in 2026-03-10T13:43:07.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:06 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/191771437' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:07.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:06 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T13:43:07.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:06 vm05 ceph-mon[51512]: from='client.50482 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-91051-56"}]: dispatch 2026-03-10T13:43:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: pgmap v379: 292 pgs: 292 active+clean; 8.3 MiB data, 742 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-10T13:43:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='client.50482 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-91051-56"}]': finished 2026-03-10T13:43:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: osdmap e286: 8 total, 8 up, 8 in 2026-03-10T13:43:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-91051-57"}]: dispatch 2026-03-10T13:43:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-91051-57"}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm05-91051-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm05-91276-43","var": "pg_num","format": "json"}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-43"}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1b", "id": [3, 1]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 7]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "268.c", "id": [4, 3]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1b", "id": [3, 1]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 7]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "268.c", "id": [4, 3]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: pgmap v379: 292 pgs: 292 active+clean; 8.3 MiB data, 742 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='client.50482 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-91051-56"}]': finished 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: osdmap e286: 8 total, 8 up, 8 in 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-91051-57"}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-91051-57"}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm05-91051-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm05-91276-43","var": "pg_num","format": "json"}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-43"}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1b", "id": [3, 1]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 7]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "268.c", "id": [4, 3]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1b", "id": [3, 1]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 7]}]: dispatch 2026-03-10T13:43:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:07 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "268.c", "id": [4, 3]}]: dispatch 2026-03-10T13:43:08.399 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: pgmap v379: 292 pgs: 292 active+clean; 8.3 MiB data, 742 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-43","var": "hit_set_type","val": "explicit_hash"}]': finished 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='client.50482 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm05-91051-56"}]': finished 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: osdmap e286: 8 total, 8 up, 8 in 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-91051-57"}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-91051-57"}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm05-91051-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm05-91276-43","var": "pg_num","format": "json"}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-43"}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1]}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1b", "id": [3, 1]}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 7]}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "268.c", "id": [4, 3]}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1]}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1b", "id": [3, 1]}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 7]}]: dispatch 2026-03-10T13:43:08.400 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:07 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "268.c", "id": [4, 3]}]: dispatch 2026-03-10T13:43:08.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:43:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:43:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm05-91051-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:43:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-43"}]': finished 2026-03-10T13:43:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1]}]': finished 2026-03-10T13:43:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]': finished 2026-03-10T13:43:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1b", "id": [3, 1]}]': finished 2026-03-10T13:43:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 7]}]': finished 2026-03-10T13:43:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "268.c", "id": [4, 3]}]': finished 2026-03-10T13:43:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[58955]: osdmap e287: 8 total, 8 up, 8 in 2026-03-10T13:43:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm05-91051-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm05-91051-57"}]: dispatch 2026-03-10T13:43:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[58955]: pgmap v382: 292 pgs: 292 active+clean; 8.3 MiB data, 742 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:43:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:43:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[58955]: osdmap e288: 8 total, 8 up, 8 in 2026-03-10T13:43:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm05-91051-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:43:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-43"}]': finished 2026-03-10T13:43:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1]}]': finished 2026-03-10T13:43:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]': finished 2026-03-10T13:43:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1b", "id": [3, 1]}]': finished 2026-03-10T13:43:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 7]}]': finished 2026-03-10T13:43:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "268.c", "id": [4, 3]}]': finished 2026-03-10T13:43:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[51512]: osdmap e287: 8 total, 8 up, 8 in 2026-03-10T13:43:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm05-91051-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm05-91051-57"}]: dispatch 2026-03-10T13:43:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[51512]: pgmap v382: 292 pgs: 292 active+clean; 8.3 MiB data, 742 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:43:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:43:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:09 vm05 ceph-mon[51512]: osdmap e288: 8 total, 8 up, 8 in 2026-03-10T13:43:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm05-91051-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:43:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:08 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-43"}]': finished 2026-03-10T13:43:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:08 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.8", "id": [6, 1]}]': finished 2026-03-10T13:43:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:08 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.11", "id": [3, 1]}]': finished 2026-03-10T13:43:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:08 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1b", "id": [3, 1]}]': finished 2026-03-10T13:43:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:08 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1d", "id": [3, 7]}]': finished 2026-03-10T13:43:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:08 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "268.c", "id": [4, 3]}]': finished 2026-03-10T13:43:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:08 vm09 ceph-mon[53367]: osdmap e287: 8 total, 8 up, 8 in 2026-03-10T13:43:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm05-91051-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm05-91051-57"}]: dispatch 2026-03-10T13:43:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:08 vm09 ceph-mon[53367]: pgmap v382: 292 pgs: 292 active+clean; 8.3 MiB data, 742 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:43:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:08 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:43:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:08 vm09 ceph-mon[53367]: osdmap e288: 8 total, 8 up, 8 in 2026-03-10T13:43:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:43:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:43:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:43:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm05-91051-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm05-91051-57"}]': finished 2026-03-10T13:43:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:10 vm05 ceph-mon[58955]: osdmap e289: 8 total, 8 up, 8 in 2026-03-10T13:43:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:10 vm05 ceph-mon[58955]: pgmap v385: 268 pgs: 8 unknown, 4 peering, 256 active+clean; 8.3 MiB data, 745 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:11.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:10 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3675832110' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm05-91051-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm05-91051-57"}]': finished 2026-03-10T13:43:11.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:10 vm05 ceph-mon[51512]: osdmap e289: 8 total, 8 up, 8 in 2026-03-10T13:43:11.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:10 vm05 ceph-mon[51512]: pgmap v385: 268 pgs: 8 unknown, 4 peering, 256 active+clean; 8.3 MiB data, 745 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:11.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm05-91051-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm05-91051-57"}]': finished 2026-03-10T13:43:11.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:10 vm09 ceph-mon[53367]: osdmap e289: 8 total, 8 up, 8 in 2026-03-10T13:43:11.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:10 vm09 ceph-mon[53367]: pgmap v385: 268 pgs: 8 unknown, 4 peering, 256 active+clean; 8.3 MiB data, 745 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:12 vm05 ceph-mon[58955]: Health check failed: Reduced data availability: 3 pgs inactive, 3 pgs peering (PG_AVAILABILITY) 2026-03-10T13:43:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:12 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:12 vm05 ceph-mon[58955]: osdmap e290: 8 total, 8 up, 8 in 2026-03-10T13:43:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:12 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:43:12.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:12 vm05 ceph-mon[51512]: Health check failed: Reduced data availability: 3 pgs inactive, 3 pgs peering (PG_AVAILABILITY) 2026-03-10T13:43:12.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:12 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:12.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:12 vm05 ceph-mon[51512]: osdmap e290: 8 total, 8 up, 8 in 2026-03-10T13:43:12.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:12 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:43:12.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:12 vm09 ceph-mon[53367]: Health check failed: Reduced data availability: 3 pgs inactive, 3 pgs peering (PG_AVAILABILITY) 2026-03-10T13:43:12.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:12 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:12.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:12 vm09 ceph-mon[53367]: osdmap e290: 8 total, 8 up, 8 in 2026-03-10T13:43:12.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:12 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:43:13.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:13 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:43:13.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:13 vm09 ceph-mon[53367]: osdmap e291: 8 total, 8 up, 8 in 2026-03-10T13:43:13.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:13 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-91051-57"}]: dispatch 2026-03-10T13:43:13.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:13 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:43:13.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:13 vm09 ceph-mon[53367]: pgmap v388: 292 pgs: 32 unknown, 4 peering, 256 active+clean; 8.3 MiB data, 745 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:13.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:13 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:43:13.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:13 vm05 ceph-mon[58955]: osdmap e291: 8 total, 8 up, 8 in 2026-03-10T13:43:13.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:13 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-91051-57"}]: dispatch 2026-03-10T13:43:13.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:13 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:43:13.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:13 vm05 ceph-mon[58955]: pgmap v388: 292 pgs: 32 unknown, 4 peering, 256 active+clean; 8.3 MiB data, 745 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:13.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:13 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-45","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:43:13.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:13 vm05 ceph-mon[51512]: osdmap e291: 8 total, 8 up, 8 in 2026-03-10T13:43:13.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:13 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-91051-57"}]: dispatch 2026-03-10T13:43:13.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:13 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-45", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:43:13.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:13 vm05 ceph-mon[51512]: pgmap v388: 292 pgs: 32 unknown, 4 peering, 256 active+clean; 8.3 MiB data, 745 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-91051-57"}]': finished 2026-03-10T13:43:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:43:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:14 vm09 ceph-mon[53367]: osdmap e292: 8 total, 8 up, 8 in 2026-03-10T13:43:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-91051-57"}]: dispatch 2026-03-10T13:43:14.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-10T13:43:14.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-91051-57"}]': finished 2026-03-10T13:43:14.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:14 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_count","val": "3"}]': finished 2026-03-10T13:43:14.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:14 vm09 ceph-mon[53367]: osdmap e293: 8 total, 8 up, 8 in 2026-03-10T13:43:14.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T13:43:14.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-91051-57"}]': finished 2026-03-10T13:43:14.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:43:14.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[58955]: osdmap e292: 8 total, 8 up, 8 in 2026-03-10T13:43:14.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-91051-57"}]: dispatch 2026-03-10T13:43:14.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-10T13:43:14.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-91051-57"}]': finished 2026-03-10T13:43:14.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_count","val": "3"}]': finished 2026-03-10T13:43:14.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[58955]: osdmap e293: 8 total, 8 up, 8 in 2026-03-10T13:43:14.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T13:43:14.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm05-91051-57"}]': finished 2026-03-10T13:43:14.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-45", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:43:14.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[51512]: osdmap e292: 8 total, 8 up, 8 in 2026-03-10T13:43:14.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-91051-57"}]: dispatch 2026-03-10T13:43:14.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-10T13:43:14.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3675832110' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm05-91051-57"}]': finished 2026-03-10T13:43:14.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_count","val": "3"}]': finished 2026-03-10T13:43:14.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[51512]: osdmap e293: 8 total, 8 up, 8 in 2026-03-10T13:43:14.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T13:43:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:15 vm09 ceph-mon[53367]: pgmap v390: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:15 vm09 ceph-mon[53367]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs inactive, 3 pgs peering) 2026-03-10T13:43:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:15 vm09 ceph-mon[53367]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:15 vm09 ceph-mon[53367]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:15 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm05-91051-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:15.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:15 vm09 ceph-mon[53367]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm05-91051-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:15.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_period","val": "3"}]': finished 2026-03-10T13:43:15.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:15 vm09 ceph-mon[53367]: from='client.49958 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm05-91051-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:43:15.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:15 vm09 ceph-mon[53367]: osdmap e294: 8 total, 8 up, 8 in 2026-03-10T13:43:15.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:43:15.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm05-91051-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:15 vm09 ceph-mon[53367]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm05-91051-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[58955]: pgmap v390: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:15.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[58955]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs inactive, 3 pgs peering) 2026-03-10T13:43:15.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[58955]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[58955]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm05-91051-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:15.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[58955]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm05-91051-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:15.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_period","val": "3"}]': finished 2026-03-10T13:43:15.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[58955]: from='client.49958 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm05-91051-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:43:15.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[58955]: osdmap e294: 8 total, 8 up, 8 in 2026-03-10T13:43:15.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:43:15.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm05-91051-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[58955]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm05-91051-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.484 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[51512]: pgmap v390: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:15.484 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[51512]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs inactive, 3 pgs peering) 2026-03-10T13:43:15.484 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.484 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[51512]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.484 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.484 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[51512]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.484 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm05-91051-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:15.484 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[51512]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm05-91051-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:15.484 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_period","val": "3"}]': finished 2026-03-10T13:43:15.484 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[51512]: from='client.49958 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm05-91051-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:43:15.484 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[51512]: osdmap e294: 8 total, 8 up, 8 in 2026-03-10T13:43:15.484 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:43:15.484 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm05-91051-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:15.484 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:15 vm05 ceph-mon[51512]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm05-91051-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:17 vm09 ceph-mon[53367]: pgmap v393: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T13:43:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:17 vm09 ceph-mon[53367]: osdmap e295: 8 total, 8 up, 8 in 2026-03-10T13:43:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T13:43:17.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:17 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:17 vm05 ceph-mon[58955]: pgmap v393: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T13:43:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:17 vm05 ceph-mon[58955]: osdmap e295: 8 total, 8 up, 8 in 2026-03-10T13:43:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T13:43:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:17 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:17 vm05 ceph-mon[51512]: pgmap v393: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:43:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:17 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T13:43:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:17 vm05 ceph-mon[51512]: osdmap e295: 8 total, 8 up, 8 in 2026-03-10T13:43:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T13:43:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:17 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:18.402 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:18 vm09 ceph-mon[53367]: from='client.49958 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm05-91051-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm05-91051-58"}]': finished 2026-03-10T13:43:18.403 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:18 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T13:43:18.403 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:18 vm09 ceph-mon[53367]: osdmap e296: 8 total, 8 up, 8 in 2026-03-10T13:43:18.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:18 vm05 ceph-mon[58955]: from='client.49958 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm05-91051-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm05-91051-58"}]': finished 2026-03-10T13:43:18.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:18 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T13:43:18.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:18 vm05 ceph-mon[58955]: osdmap e296: 8 total, 8 up, 8 in 2026-03-10T13:43:18.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:18 vm05 ceph-mon[51512]: from='client.49958 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm05-91051-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm05-91051-58"}]': finished 2026-03-10T13:43:18.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:18 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T13:43:18.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:18 vm05 ceph-mon[51512]: osdmap e296: 8 total, 8 up, 8 in 2026-03-10T13:43:18.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:43:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:43:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:19 vm09 ceph-mon[53367]: pgmap v396: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:43:19.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:19 vm09 ceph-mon[53367]: osdmap e297: 8 total, 8 up, 8 in 2026-03-10T13:43:19.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:43:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:19 vm05 ceph-mon[58955]: pgmap v396: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:43:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:19 vm05 ceph-mon[58955]: osdmap e297: 8 total, 8 up, 8 in 2026-03-10T13:43:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:43:19.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:19 vm05 ceph-mon[51512]: pgmap v396: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:43:19.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:19 vm05 ceph-mon[51512]: osdmap e297: 8 total, 8 up, 8 in 2026-03-10T13:43:19.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:43:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:20 vm05 ceph-mon[58955]: osdmap e298: 8 total, 8 up, 8 in 2026-03-10T13:43:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:20 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:20 vm05 ceph-mon[58955]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:20 vm05 ceph-mon[58955]: from='client.49958 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-91051-58"}]': finished 2026-03-10T13:43:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:20 vm05 ceph-mon[58955]: osdmap e299: 8 total, 8 up, 8 in 2026-03-10T13:43:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:20 vm05 ceph-mon[51512]: osdmap e298: 8 total, 8 up, 8 in 2026-03-10T13:43:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:20 vm05 ceph-mon[51512]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:20 vm05 ceph-mon[51512]: from='client.49958 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-91051-58"}]': finished 2026-03-10T13:43:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:20 vm05 ceph-mon[51512]: osdmap e299: 8 total, 8 up, 8 in 2026-03-10T13:43:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:43:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:43:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:43:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:20 vm09 ceph-mon[53367]: osdmap e298: 8 total, 8 up, 8 in 2026-03-10T13:43:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:20 vm09 ceph-mon[53367]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:20 vm09 ceph-mon[53367]: from='client.49958 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm05-91051-58"}]': finished 2026-03-10T13:43:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:20 vm09 ceph-mon[53367]: osdmap e299: 8 total, 8 up, 8 in 2026-03-10T13:43:21.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:21 vm05 ceph-mon[58955]: pgmap v399: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:43:21.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:21 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:21.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:21 vm05 ceph-mon[58955]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:21.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:21 vm05 ceph-mon[58955]: from='client.49958 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-91051-58"}]': finished 2026-03-10T13:43:21.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:21 vm05 ceph-mon[58955]: osdmap e300: 8 total, 8 up, 8 in 2026-03-10T13:43:21.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:21 vm05 ceph-mon[51512]: pgmap v399: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:43:21.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:21.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:21 vm05 ceph-mon[51512]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:21.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:21 vm05 ceph-mon[51512]: from='client.49958 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-91051-58"}]': finished 2026-03-10T13:43:21.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:21 vm05 ceph-mon[51512]: osdmap e300: 8 total, 8 up, 8 in 2026-03-10T13:43:21.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:21 vm09 ceph-mon[53367]: pgmap v399: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:43:21.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3169076944' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:21.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:21 vm09 ceph-mon[53367]: from='client.49958 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-91051-58"}]: dispatch 2026-03-10T13:43:21.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:21 vm09 ceph-mon[53367]: from='client.49958 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm05-91051-58"}]': finished 2026-03-10T13:43:21.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:21 vm09 ceph-mon[53367]: osdmap e300: 8 total, 8 up, 8 in 2026-03-10T13:43:22.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-91051-59"}]: dispatch 2026-03-10T13:43:22.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-91051-59"}]: dispatch 2026-03-10T13:43:22.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:22 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm05-91051-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:22.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-91051-59"}]: dispatch 2026-03-10T13:43:22.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-91051-59"}]: dispatch 2026-03-10T13:43:22.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm05-91051-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:22.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-91051-59"}]: dispatch 2026-03-10T13:43:22.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-91051-59"}]: dispatch 2026-03-10T13:43:22.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm05-91051-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:23 vm05 ceph-mon[58955]: pgmap v402: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:43:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm05-91051-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:43:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:23 vm05 ceph-mon[58955]: osdmap e301: 8 total, 8 up, 8 in 2026-03-10T13:43:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:23 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm05-91051-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm05-91051-59"}]: dispatch 2026-03-10T13:43:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:43:23.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:23 vm05 ceph-mon[51512]: pgmap v402: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:43:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm05-91051-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:43:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:23 vm05 ceph-mon[51512]: osdmap e301: 8 total, 8 up, 8 in 2026-03-10T13:43:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm05-91051-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm05-91051-59"}]: dispatch 2026-03-10T13:43:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:43:23.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:23 vm09 ceph-mon[53367]: pgmap v402: 292 pgs: 292 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:43:23.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm05-91051-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:43:23.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:23 vm09 ceph-mon[53367]: osdmap e301: 8 total, 8 up, 8 in 2026-03-10T13:43:23.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm05-91051-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm05-91051-59"}]: dispatch 2026-03-10T13:43:23.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:43:24.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:24 vm05 ceph-mon[58955]: osdmap e302: 8 total, 8 up, 8 in 2026-03-10T13:43:24.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:24 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1303562542' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm05-91051-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm05-91051-59"}]': finished 2026-03-10T13:43:24.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:24 vm05 ceph-mon[58955]: osdmap e303: 8 total, 8 up, 8 in 2026-03-10T13:43:24.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:24 vm05 ceph-mon[51512]: osdmap e302: 8 total, 8 up, 8 in 2026-03-10T13:43:24.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm05-91051-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm05-91051-59"}]': finished 2026-03-10T13:43:24.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:24 vm05 ceph-mon[51512]: osdmap e303: 8 total, 8 up, 8 in 2026-03-10T13:43:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:24 vm09 ceph-mon[53367]: osdmap e302: 8 total, 8 up, 8 in 2026-03-10T13:43:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm05-91051-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm05-91051-59"}]': finished 2026-03-10T13:43:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:24 vm09 ceph-mon[53367]: osdmap e303: 8 total, 8 up, 8 in 2026-03-10T13:43:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:25 vm05 ceph-mon[58955]: pgmap v405: 292 pgs: 292 active+clean; 8.3 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-10T13:43:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:25 vm05 ceph-mon[58955]: osdmap e304: 8 total, 8 up, 8 in 2026-03-10T13:43:25.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:25 vm05 ceph-mon[51512]: pgmap v405: 292 pgs: 292 active+clean; 8.3 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-10T13:43:25.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:25 vm05 ceph-mon[51512]: osdmap e304: 8 total, 8 up, 8 in 2026-03-10T13:43:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:25 vm09 ceph-mon[53367]: pgmap v405: 292 pgs: 292 active+clean; 8.3 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-10T13:43:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:25 vm09 ceph-mon[53367]: osdmap e304: 8 total, 8 up, 8 in 2026-03-10T13:43:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:27 vm05 ceph-mon[58955]: pgmap v408: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-10T13:43:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:27 vm05 ceph-mon[58955]: osdmap e305: 8 total, 8 up, 8 in 2026-03-10T13:43:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:27 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-91051-59"}]: dispatch 2026-03-10T13:43:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:27 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:27.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:27 vm05 ceph-mon[51512]: pgmap v408: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-10T13:43:27.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:27 vm05 ceph-mon[51512]: osdmap e305: 8 total, 8 up, 8 in 2026-03-10T13:43:27.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-91051-59"}]: dispatch 2026-03-10T13:43:27.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:27 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:27 vm09 ceph-mon[53367]: pgmap v408: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 744 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s 2026-03-10T13:43:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:27 vm09 ceph-mon[53367]: osdmap e305: 8 total, 8 up, 8 in 2026-03-10T13:43:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-91051-59"}]: dispatch 2026-03-10T13:43:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:27 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:28.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-91051-59"}]': finished 2026-03-10T13:43:28.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:28 vm05 ceph-mon[58955]: osdmap e306: 8 total, 8 up, 8 in 2026-03-10T13:43:28.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-91051-59"}]: dispatch 2026-03-10T13:43:28.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-91051-59"}]': finished 2026-03-10T13:43:28.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:28 vm05 ceph-mon[58955]: osdmap e307: 8 total, 8 up, 8 in 2026-03-10T13:43:28.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:28 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1303562542' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-91051-59"}]': finished 2026-03-10T13:43:28.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:28 vm05 ceph-mon[51512]: osdmap e306: 8 total, 8 up, 8 in 2026-03-10T13:43:28.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-91051-59"}]: dispatch 2026-03-10T13:43:28.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-91051-59"}]': finished 2026-03-10T13:43:28.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:28 vm05 ceph-mon[51512]: osdmap e307: 8 total, 8 up, 8 in 2026-03-10T13:43:28.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm05-91051-59"}]': finished 2026-03-10T13:43:28.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:28 vm09 ceph-mon[53367]: osdmap e306: 8 total, 8 up, 8 in 2026-03-10T13:43:28.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-91051-59"}]: dispatch 2026-03-10T13:43:28.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1303562542' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm05-91051-59"}]': finished 2026-03-10T13:43:28.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:28 vm09 ceph-mon[53367]: osdmap e307: 8 total, 8 up, 8 in 2026-03-10T13:43:28.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:43:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:43:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:29 vm05 ceph-mon[58955]: pgmap v411: 292 pgs: 292 active+clean; 8.3 MiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:43:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:43:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:43:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:29 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-45"}]: dispatch 2026-03-10T13:43:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:29 vm05 ceph-mon[51512]: pgmap v411: 292 pgs: 292 active+clean; 8.3 MiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:43:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:43:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:43:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-45"}]: dispatch 2026-03-10T13:43:29.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:29 vm09 ceph-mon[53367]: pgmap v411: 292 pgs: 292 active+clean; 8.3 MiB data, 744 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:43:29.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:43:29.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:43:29.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-45"}]: dispatch 2026-03-10T13:43:30.314 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:43:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:43:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:43:30.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-45"}]': finished 2026-03-10T13:43:30.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:30 vm05 ceph-mon[58955]: osdmap e308: 8 total, 8 up, 8 in 2026-03-10T13:43:30.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:30 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/514771823' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:43:30.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:30 vm05 ceph-mon[58955]: from='client.50500 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:43:30.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:30 vm05 ceph-mon[58955]: from='client.50500 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:43:30.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:30 vm05 ceph-mon[58955]: osdmap e309: 8 total, 8 up, 8 in 2026-03-10T13:43:30.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-45"}]': finished 2026-03-10T13:43:30.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:30 vm05 ceph-mon[51512]: osdmap e308: 8 total, 8 up, 8 in 2026-03-10T13:43:30.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/514771823' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:43:30.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:30 vm05 ceph-mon[51512]: from='client.50500 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:43:30.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:30 vm05 ceph-mon[51512]: from='client.50500 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:43:30.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:30 vm05 ceph-mon[51512]: osdmap e309: 8 total, 8 up, 8 in 2026-03-10T13:43:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-45"}]': finished 2026-03-10T13:43:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:30 vm09 ceph-mon[53367]: osdmap e308: 8 total, 8 up, 8 in 2026-03-10T13:43:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:30 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/514771823' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:43:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:30 vm09 ceph-mon[53367]: from='client.50500 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-60","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:43:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:30 vm09 ceph-mon[53367]: from='client.50500 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm05-91051-60","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:43:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:30 vm09 ceph-mon[53367]: osdmap e309: 8 total, 8 up, 8 in 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:7462ddf6:::.RoundTripAppendPP (3237 ms) 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RacingRemovePP 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RacingRemovePP (3020 ms) 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripCmpExtPP 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripCmpExtPP (3001 ms) 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripCmpExtPP2 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripCmpExtPP2 (3055 ms) 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.PoolEIOFlag 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: setting pool EIO 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: max_success 98, min_failed 99 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.PoolEIOFlag (4016 ms) 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAio.MultiReads 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAio.MultiReads (3006 ms) 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 32 tests from LibRadosAio (120564 ms total) 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 4 tests from LibRadosAioPP 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.ReadIntoBufferlist 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioPP.ReadIntoBufferlist (3038 ms) 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.XattrsRoundTripPP 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioPP.XattrsRoundTripPP (9150 ms) 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.RmXattrPP 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioPP.RmXattrPP (15262 ms) 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: 
api_aio_pp: [ RUN ] LibRadosAioPP.RemoveTestPP 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioPP.RemoveTestPP (3450 ms) 2026-03-10T13:43:31.286 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 4 tests from LibRadosAioPP (30901 ms total) 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 1 test from LibRadosIoPP 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosIoPP.XattrListPP 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosIoPP.XattrListPP (3338 ms) 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 1 test from LibRadosIoPP (3338 ms total) 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 20 tests from LibRadosAioEC 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleWritePP 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleWritePP (14231 ms) 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.WaitForSafePP 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.WaitForSafePP (7396 ms) 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP (7015 ms) 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP2 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP2 (6350 ms) 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP3 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP3 (3016 ms) 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripSparseReadPP 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripSparseReadPP (6985 ms) 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripAppendPP 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripAppendPP (7027 ms) 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.IsCompletePP 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.IsCompletePP (7477 ms) 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.IsSafePP 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.IsSafePP (7102 ms) 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.ReturnValuePP 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.ReturnValuePP (7134 ms) 2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.FlushPP 
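The mon audit entries interleaved with this gtest output record the JSON dispatch form of the pool setup that each erasure-coded test case performs: remove any leftover profile and CRUSH rule, create a k=2/m=1 profile, create an 8-PG erasure pool on it, run the I/O, then tear the profile and rule back down. A minimal sketch of the equivalent ceph CLI sequence, using the FlushAsyncPP names from this run (the vm05-91051-NN suffixes vary per test case; ordering condensed from the dispatch/finished pairs above):

    # create the EC profile the test pool uses (k=2, m=1, failure domain = osd)
    ceph osd erasure-code-profile set testprofile-FlushAsyncPP_vm05-91051-58 k=2 m=1 crush-failure-domain=osd
    # create the erasure pool with pg_num=8, pgp_num=8 backed by that profile
    ceph osd pool create FlushAsyncPP_vm05-91051-58 8 8 erasure testprofile-FlushAsyncPP_vm05-91051-58
    # teardown, mirroring the "rm" dispatches logged by the mons
    ceph osd crush rule rm FlushAsyncPP_vm05-91051-58
    ceph osd erasure-code-profile rm testprofile-FlushAsyncPP_vm05-91051-58

The cache-tier traffic against test-rados-api-vm05-91276-6 in the same window follows the analogous CLI pattern: ceph osd tier add <base> <cache> --force-nonempty, the hit_set_* pool settings seen above (hit_set_type bloom, hit_set_count 3, hit_set_period 3, hit_set_fpp .01), ceph osd tier set-overlay <base> <cache>, and on cleanup ceph osd tier remove-overlay <base> followed by ceph osd tier remove <base> <cache>.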
2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.FlushPP (7211 ms)
2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.FlushAsyncPP
2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.FlushAsyncPP (7058 ms)
2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripWriteFullPP
2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripWriteFullPP (7085 ms)
2026-03-10T13:43:31.287 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripWriteFullPP2
2026-03-10T13:43:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:31 vm05 ceph-mon[58955]: pgmap v414: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 745 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T13:43:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:31 vm05 ceph-mon[58955]: osdmap e310: 8 total, 8 up, 8 in
2026-03-10T13:43:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-47","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:43:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-91051-61"}]: dispatch
2026-03-10T13:43:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-91051-61"}]: dispatch
2026-03-10T13:43:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm05-91051-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:43:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:31 vm05 ceph-mon[51512]: pgmap v414: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 745 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T13:43:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:31 vm05 ceph-mon[51512]: osdmap e310: 8 total, 8 up, 8 in
2026-03-10T13:43:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-47","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:43:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-91051-61"}]: dispatch
2026-03-10T13:43:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-91051-61"}]: dispatch
2026-03-10T13:43:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm05-91051-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:43:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:31 vm09 ceph-mon[53367]: pgmap v414: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 745 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T13:43:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:31 vm09 ceph-mon[53367]: osdmap e310: 8 total, 8 up, 8 in
2026-03-10T13:43:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-47","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:43:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-91051-61"}]: dispatch
2026-03-10T13:43:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-91051-61"}]: dispatch
2026-03-10T13:43:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm05-91051-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:43:33.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:33 vm05 ceph-mon[58955]: pgmap v417: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 745 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:43:33.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-47","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:43:33.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm05-91051-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T13:43:33.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:33 vm05 ceph-mon[58955]: osdmap e311: 8 total, 8 up, 8 in
2026-03-10T13:43:33.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm05-91051-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm05-91051-61"}]: dispatch
2026-03-10T13:43:33.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-47", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:43:33.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:33 vm05 ceph-mon[51512]: pgmap v417: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 745 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:43:33.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-47","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:43:33.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm05-91051-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T13:43:33.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:33 vm05 ceph-mon[51512]: osdmap e311: 8 total, 8 up, 8 in
2026-03-10T13:43:33.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm05-91051-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm05-91051-61"}]: dispatch
2026-03-10T13:43:33.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-47", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:43:33.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:33 vm09 ceph-mon[53367]: pgmap v417: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 745 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:43:33.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-47","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:43:33.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm05-91051-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T13:43:33.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:33 vm09 ceph-mon[53367]: osdmap e311: 8 total, 8 up, 8 in
2026-03-10T13:43:33.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm05-91051-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm05-91051-61"}]: dispatch
2026-03-10T13:43:33.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-47", "force_nonempty": "--force-nonempty" }]: dispatch
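The mon entries above record the per-test pool setup each LibRados EC test case performs: a test-specific erasure-code profile is (re)created and an EC pool is built from it, while the companion pool gets its application tag set. As a minimal ceph CLI sketch of the same sequence (names like testprofile-demo and demo-ec are illustrative stand-ins for the generated SimpleStatPP_vm05-91051-61 ones):

    ceph osd erasure-code-profile set testprofile-demo k=2 m=1 crush-failure-domain=osd
    ceph osd pool create demo-ec 8 8 erasure testprofile-demo
    ceph osd pool application enable demo-ec rados --yes-i-really-mean-it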
2026-03-10T13:43:34.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-47", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T13:43:34.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:34 vm05 ceph-mon[51512]: osdmap e312: 8 total, 8 up, 8 in
2026-03-10T13:43:34.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-47"}]: dispatch
2026-03-10T13:43:34.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-47", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T13:43:34.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:34 vm05 ceph-mon[58955]: osdmap e312: 8 total, 8 up, 8 in
2026-03-10T13:43:34.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-47"}]: dispatch
2026-03-10T13:43:34.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-47", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T13:43:34.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:34 vm09 ceph-mon[53367]: osdmap e312: 8 total, 8 up, 8 in
2026-03-10T13:43:34.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-47"}]: dispatch
2026-03-10T13:43:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:35 vm05 ceph-mon[58955]: pgmap v420: 292 pgs: 292 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T13:43:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:35 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:43:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm05-91051-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm05-91051-61"}]': finished
2026-03-10T13:43:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-47"}]': finished
2026-03-10T13:43:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:35 vm05 ceph-mon[58955]: osdmap e313: 8 total, 8 up, 8 in
2026-03-10T13:43:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-47", "mode": "writeback"}]: dispatch
2026-03-10T13:43:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:35 vm05 ceph-mon[51512]: pgmap v420: 292 pgs: 292 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T13:43:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:35 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:43:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm05-91051-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm05-91051-61"}]': finished
2026-03-10T13:43:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-47"}]': finished
2026-03-10T13:43:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:35 vm05 ceph-mon[51512]: osdmap e313: 8 total, 8 up, 8 in
2026-03-10T13:43:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-47", "mode": "writeback"}]: dispatch
2026-03-10T13:43:35.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:35 vm09 ceph-mon[53367]: pgmap v420: 292 pgs: 292 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T13:43:35.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:35 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:43:35.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm05-91051-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm05-91051-61"}]': finished
2026-03-10T13:43:35.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-47"}]': finished
2026-03-10T13:43:35.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:35 vm09 ceph-mon[53367]: osdmap e313: 8 total, 8 up, 8 in
2026-03-10T13:43:35.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-47", "mode": "writeback"}]: dispatch
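The tier commands dispatched here assemble a cache tier over the base pool test-rados-api-vm05-91276-6, in the order the test issues them: tier add with --force-nonempty, then set-overlay, then cache-mode writeback. Roughly the equivalent CLI, with base/cache standing in for the generated pool names:

    ceph osd tier add base cache --force-nonempty
    ceph osd tier set-overlay base cache
    ceph osd tier cache-mode cache writeback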
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-47", "mode": "writeback"}]': finished
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[58955]: osdmap e314: 8 total, 8 up, 8 in
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_count","val": "2"}]': finished
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[58955]: osdmap e315: 8 total, 8 up, 8 in
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-91051-61"}]: dispatch
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-47", "mode": "writeback"}]': finished
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[51512]: osdmap e314: 8 total, 8 up, 8 in
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_count","val": "2"}]': finished
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[51512]: osdmap e315: 8 total, 8 up, 8 in
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T13:43:36.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-91051-61"}]: dispatch
2026-03-10T13:43:36.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:36 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T13:43:36.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-47", "mode": "writeback"}]': finished
2026-03-10T13:43:36.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:36 vm09 ceph-mon[53367]: osdmap e314: 8 total, 8 up, 8 in
2026-03-10T13:43:36.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T13:43:36.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_count","val": "2"}]': finished
2026-03-10T13:43:36.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:36 vm09 ceph-mon[53367]: osdmap e315: 8 total, 8 up, 8 in
2026-03-10T13:43:36.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T13:43:36.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-91051-61"}]: dispatch
2026-03-10T13:43:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:37 vm05 ceph-mon[58955]: pgmap v423: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s
2026-03-10T13:43:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_period","val": "600"}]': finished
2026-03-10T13:43:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-91051-61"}]': finished
2026-03-10T13:43:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:37 vm05 ceph-mon[58955]: osdmap e316: 8 total, 8 up, 8 in
2026-03-10T13:43:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T13:43:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-91051-61"}]: dispatch
2026-03-10T13:43:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:37 vm05 ceph-mon[51512]: pgmap v423: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s
2026-03-10T13:43:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_period","val": "600"}]': finished
2026-03-10T13:43:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-91051-61"}]': finished
2026-03-10T13:43:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:37 vm05 ceph-mon[51512]: osdmap e316: 8 total, 8 up, 8 in
2026-03-10T13:43:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T13:43:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-91051-61"}]: dispatch
2026-03-10T13:43:37.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:37 vm09 ceph-mon[53367]: pgmap v423: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s
2026-03-10T13:43:37.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_period","val": "600"}]': finished
2026-03-10T13:43:37.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm05-91051-61"}]': finished
2026-03-10T13:43:37.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:37 vm09 ceph-mon[53367]: osdmap e316: 8 total, 8 up, 8 in
2026-03-10T13:43:37.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T13:43:37.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-91051-61"}]: dispatch
2026-03-10T13:43:38.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:43:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T13:43:39.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:43:39.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:43:39.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T13:43:39.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T13:43:39.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-91051-61"}]': finished
2026-03-10T13:43:39.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[58955]: osdmap e317: 8 total, 8 up, 8 in
2026-03-10T13:43:39.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch
2026-03-10T13:43:39.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-91051-62"}]: dispatch
2026-03-10T13:43:39.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-91051-62"}]: dispatch
2026-03-10T13:43:39.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm05-91051-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:43:39.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:43:39.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:43:39.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T13:43:39.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_type","val": "bloom"}]': finished
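Note the health-check churn here: putting the cache pool into writeback before any hit_set is configured raises CACHE_POOL_NO_HIT_SET at 13:43:36, and the check clears at 13:43:38 once the hit_set_type change lands. While that window is open, something like the following would surface and then resolve it (a sketch; assumes an admin keyring on the host, and cache stands in for the generated pool name):

    ceph health detail    # lists CACHE_POOL_NO_HIT_SET until a hit_set_type is set
    ceph osd pool set cache hit_set_type bloom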
2026-03-10T13:43:39.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-91051-61"}]': finished
2026-03-10T13:43:39.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[51512]: osdmap e317: 8 total, 8 up, 8 in
2026-03-10T13:43:39.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch
2026-03-10T13:43:39.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-91051-62"}]: dispatch
2026-03-10T13:43:39.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-91051-62"}]: dispatch
2026-03-10T13:43:39.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm05-91051-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:43:39.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:38 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:43:39.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:43:39.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:38 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T13:43:39.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T13:43:39.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3302991072' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm05-91051-61"}]': finished
2026-03-10T13:43:39.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:38 vm09 ceph-mon[53367]: osdmap e317: 8 total, 8 up, 8 in
2026-03-10T13:43:39.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch
2026-03-10T13:43:39.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-91051-62"}]: dispatch
2026-03-10T13:43:39.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-91051-62"}]: dispatch
2026-03-10T13:43:39.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm05-91051-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:43:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[51512]: pgmap v426: 292 pgs: 292 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail
2026-03-10T13:43:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:43:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:43:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:43:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:43:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "min_read_recency_for_promote","val": "1"}]': finished
2026-03-10T13:43:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm05-91051-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T13:43:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[51512]: osdmap e318: 8 total, 8 up, 8 in
2026-03-10T13:43:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch
2026-03-10T13:43:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm05-91051-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm05-91051-62"}]: dispatch
2026-03-10T13:43:40.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:43:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:43:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T13:43:40.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[58955]: pgmap v426: 292 pgs: 292 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail
2026-03-10T13:43:40.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:43:40.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:43:40.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:40.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:43:40.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:43:40.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "min_read_recency_for_promote","val": "1"}]': finished
2026-03-10T13:43:40.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm05-91051-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T13:43:40.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[58955]: osdmap e318: 8 total, 8 up, 8 in
2026-03-10T13:43:40.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch
2026-03-10T13:43:40.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm05-91051-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm05-91051-62"}]: dispatch
2026-03-10T13:43:40.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:39 vm09 ceph-mon[53367]: pgmap v426: 292 pgs: 292 active+clean; 8.3 MiB data, 729 MiB used, 159 GiB / 160 GiB avail
2026-03-10T13:43:40.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:43:40.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:39 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:43:40.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:39 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:40.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:39 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:43:40.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:39 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:43:40.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "min_read_recency_for_promote","val": "1"}]': finished
2026-03-10T13:43:40.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm05-91051-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T13:43:40.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:39 vm09 ceph-mon[53367]: osdmap e318: 8 total, 8 up, 8 in
2026-03-10T13:43:40.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch
2026-03-10T13:43:40.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm05-91051-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm05-91051-62"}]: dispatch
2026-03-10T13:43:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:41 vm05 ceph-mon[58955]: pgmap v429: 292 pgs: 292 active+clean; 8.3 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:43:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished
2026-03-10T13:43:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:41 vm05 ceph-mon[58955]: osdmap e319: 8 total, 8 up, 8 in
2026-03-10T13:43:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_search_last_n","val": "1"}]: dispatch
2026-03-10T13:43:41.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:41 vm05 ceph-mon[51512]: pgmap v429: 292 pgs: 292 active+clean; 8.3 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:43:41.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished
2026-03-10T13:43:41.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:41 vm05 ceph-mon[51512]: osdmap e319: 8 total, 8 up, 8 in
2026-03-10T13:43:41.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_search_last_n","val": "1"}]: dispatch
2026-03-10T13:43:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:41 vm09 ceph-mon[53367]: pgmap v429: 292 pgs: 292 active+clean; 8.3 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:43:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished
2026-03-10T13:43:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:41 vm09 ceph-mon[53367]: osdmap e319: 8 total, 8 up, 8 in
2026-03-10T13:43:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_search_last_n","val": "1"}]: dispatch
2026-03-10T13:43:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm05-91051-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm05-91051-62"}]': finished
2026-03-10T13:43:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_search_last_n","val": "1"}]': finished
2026-03-10T13:43:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:42 vm05 ceph-mon[58955]: osdmap e320: 8 total, 8 up, 8 in
2026-03-10T13:43:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
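Taken together, the osd pool set calls between 13:43:36 and 13:43:42 configure the cache pool's hit-set tracking. The same settings as plain CLI, with cache standing in for test-rados-api-vm05-91276-47:

    ceph osd pool set cache hit_set_count 2
    ceph osd pool set cache hit_set_period 600
    ceph osd pool set cache hit_set_type bloom
    ceph osd pool set cache min_read_recency_for_promote 1
    ceph osd pool set cache hit_set_grade_decay_rate 20
    ceph osd pool set cache hit_set_search_last_n 1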
2026-03-10T13:43:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm05-91051-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm05-91051-62"}]': finished
2026-03-10T13:43:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_search_last_n","val": "1"}]': finished
2026-03-10T13:43:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:42 vm05 ceph-mon[51512]: osdmap e320: 8 total, 8 up, 8 in
2026-03-10T13:43:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:43:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm05-91051-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm05-91051-62"}]': finished
2026-03-10T13:43:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-47","var": "hit_set_search_last_n","val": "1"}]': finished
2026-03-10T13:43:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:42 vm09 ceph-mon[53367]: osdmap e320: 8 total, 8 up, 8 in
2026-03-10T13:43:42.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:43:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:43 vm05 ceph-mon[58955]: pgmap v432: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:43:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:43 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:43:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished
2026-03-10T13:43:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:43 vm05 ceph-mon[58955]: osdmap e321: 8 total, 8 up, 8 in
2026-03-10T13:43:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-47"}]: dispatch
2026-03-10T13:43:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:43 vm05 ceph-mon[51512]: pgmap v432: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:43:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:43 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:43:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished
2026-03-10T13:43:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:43 vm05 ceph-mon[51512]: osdmap e321: 8 total, 8 up, 8 in
2026-03-10T13:43:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-47"}]: dispatch
2026-03-10T13:43:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:43 vm09 ceph-mon[53367]: pgmap v432: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 734 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:43:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:43 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:43:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished
2026-03-10T13:43:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:43 vm09 ceph-mon[53367]: osdmap e321: 8 total, 8 up, 8 in
2026-03-10T13:43:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-47"}]: dispatch
2026-03-10T13:43:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-47"}]': finished
2026-03-10T13:43:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[58955]: osdmap e322: 8 total, 8 up, 8 in
2026-03-10T13:43:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-91051-62"}]: dispatch
2026-03-10T13:43:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:43:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-47"}]: dispatch
2026-03-10T13:43:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-91051-62"}]': finished
2026-03-10T13:43:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[58955]: osdmap e323: 8 total, 8 up, 8 in
2026-03-10T13:43:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-91051-62"}]: dispatch
2026-03-10T13:43:44.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-47"}]': finished
2026-03-10T13:43:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[51512]: osdmap e322: 8 total, 8 up, 8 in
2026-03-10T13:43:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-91051-62"}]: dispatch
2026-03-10T13:43:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:43:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-47"}]: dispatch
2026-03-10T13:43:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-91051-62"}]': finished
2026-03-10T13:43:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[51512]: osdmap e323: 8 total, 8 up, 8 in
2026-03-10T13:43:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-91051-62"}]: dispatch
2026-03-10T13:43:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-47"}]': finished
2026-03-10T13:43:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:44 vm09 ceph-mon[53367]: osdmap e322: 8 total, 8 up, 8 in
2026-03-10T13:43:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-91051-62"}]: dispatch
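Teardown mirrors setup: the overlay is detached before the tier is removed, and the per-test erasure-code profile and CRUSH rule are deleted afterwards. As a sketch with the same placeholder names as above (an EC pool create also leaves behind a CRUSH rule named after the pool, which is why the tests remove one):

    ceph osd tier remove-overlay base
    ceph osd tier remove base cache
    ceph osd erasure-code-profile rm testprofile-demo
    ceph osd crush rule rm demo-ec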
2026-03-10T13:43:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:43:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-47"}]: dispatch
2026-03-10T13:43:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm05-91051-62"}]': finished
2026-03-10T13:43:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:44 vm09 ceph-mon[53367]: osdmap e323: 8 total, 8 up, 8 in
2026-03-10T13:43:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-91051-62"}]: dispatch
2026-03-10T13:43:45.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:45 vm09 ceph-mon[53367]: pgmap v435: 292 pgs: 292 active+clean; 8.3 MiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T13:43:45.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-91051-62"}]': finished
2026-03-10T13:43:45.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:45 vm09 ceph-mon[53367]: osdmap e324: 8 total, 8 up, 8 in
2026-03-10T13:43:45.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-49","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:43:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:45 vm05 ceph-mon[58955]: pgmap v435: 292 pgs: 292 active+clean; 8.3 MiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T13:43:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-91051-62"}]': finished
2026-03-10T13:43:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:45 vm05 ceph-mon[58955]: osdmap e324: 8 total, 8 up, 8 in
2026-03-10T13:43:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-49","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:43:45.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:45 vm05 ceph-mon[51512]: pgmap v435: 292 pgs: 292 active+clean; 8.3 MiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T13:43:45.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/151869847' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm05-91051-62"}]': finished
2026-03-10T13:43:45.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:45 vm05 ceph-mon[51512]: osdmap e324: 8 total, 8 up, 8 in
2026-03-10T13:43:45.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-49","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:43:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-91051-63"}]: dispatch
2026-03-10T13:43:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-91051-63"}]: dispatch
2026-03-10T13:43:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm05-91051-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:43:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-49","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:43:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm05-91051-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T13:43:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:46 vm05 ceph-mon[58955]: osdmap e325: 8 total, 8 up, 8 in
2026-03-10T13:43:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm05-91051-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm05-91051-63"}]: dispatch
2026-03-10T13:43:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-91051-63"}]: dispatch
2026-03-10T13:43:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-91051-63"}]: dispatch
2026-03-10T13:43:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm05-91051-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:43:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-49","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:43:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm05-91051-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T13:43:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:46 vm05 ceph-mon[51512]: osdmap e325: 8 total, 8 up, 8 in
2026-03-10T13:43:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm05-91051-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm05-91051-63"}]: dispatch
2026-03-10T13:43:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-91051-63"}]: dispatch
2026-03-10T13:43:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-91051-63"}]: dispatch
2026-03-10T13:43:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm05-91051-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T13:43:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-49","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:43:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm05-91051-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T13:43:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:46 vm09 ceph-mon[53367]: osdmap e325: 8 total, 8 up, 8 in
2026-03-10T13:43:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm05-91051-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm05-91051-63"}]: dispatch
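The POOL_APP_NOT_ENABLED count ticking 3 -> 4 -> 3 across these health updates tracks the short-lived test pools as they come and go; the tests explicitly tag the pools they keep, as in the application enable calls for test-rados-api-vm05-91276-49 above. The CLI equivalent (pool name illustrative):

    ceph osd pool application enable demo-cache rados --yes-i-really-mean-it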
v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm05-91051-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm05-91051-63"}]: dispatch 2026-03-10T13:43:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:47 vm05 ceph-mon[58955]: pgmap v438: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T13:43:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:43:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:43:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:47 vm05 ceph-mon[58955]: osdmap e326: 8 total, 8 up, 8 in 2026-03-10T13:43:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-49"}]: dispatch 2026-03-10T13:43:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:47 vm05 ceph-mon[51512]: pgmap v438: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T13:43:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:43:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:43:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:47 vm05 ceph-mon[51512]: osdmap e326: 8 total, 8 up, 8 in 2026-03-10T13:43:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-49"}]: dispatch 2026-03-10T13:43:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:47 vm09 ceph-mon[53367]: pgmap v438: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 735 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T13:43:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:47 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:43:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:47 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:43:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:47 vm09 ceph-mon[53367]: osdmap e326: 8 total, 8 up, 8 in 2026-03-10T13:43:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:47 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-49"}]: dispatch 2026-03-10T13:43:48.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:43:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:43:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:49 vm05 ceph-mon[58955]: pgmap v441: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 735 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:43:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:43:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm05-91051-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm05-91051-63"}]': finished 2026-03-10T13:43:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-49"}]': finished 2026-03-10T13:43:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:49 vm05 ceph-mon[58955]: osdmap e327: 8 total, 8 up, 8 in 2026-03-10T13:43:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-49", "mode": "readproxy"}]: dispatch 2026-03-10T13:43:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:49 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:49.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:49 vm05 ceph-mon[51512]: pgmap v441: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 735 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:43:49.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:43:49.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:49 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/602201989' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm05-91051-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm05-91051-63"}]': finished 2026-03-10T13:43:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-49"}]': finished 2026-03-10T13:43:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:49 vm05 ceph-mon[51512]: osdmap e327: 8 total, 8 up, 8 in 2026-03-10T13:43:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-49", "mode": "readproxy"}]: dispatch 2026-03-10T13:43:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:49 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:49 vm09 ceph-mon[53367]: pgmap v441: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 735 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:43:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:43:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm05-91051-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm05-91051-63"}]': finished 2026-03-10T13:43:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-49"}]': finished 2026-03-10T13:43:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:49 vm09 ceph-mon[53367]: osdmap e327: 8 total, 8 up, 8 in 2026-03-10T13:43:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-49", "mode": "readproxy"}]: dispatch 2026-03-10T13:43:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:49 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:43:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:43:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:43:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:50 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:43:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:50 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-49", "mode": "readproxy"}]': finished 2026-03-10T13:43:50.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:50 vm05 ceph-mon[58955]: osdmap e328: 8 total, 8 up, 8 in 2026-03-10T13:43:50.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:50 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:43:50.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-49", "mode": "readproxy"}]': finished 2026-03-10T13:43:50.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:50 vm05 ceph-mon[51512]: osdmap e328: 8 total, 8 up, 8 in 2026-03-10T13:43:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:50 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:43:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-49", "mode": "readproxy"}]': finished 2026-03-10T13:43:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:50 vm09 ceph-mon[53367]: osdmap e328: 8 total, 8 up, 8 in 2026-03-10T13:43:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:51 vm09 ceph-mon[53367]: pgmap v444: 300 pgs: 1 creating+peering, 7 unknown, 292 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:43:51.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:51 vm09 ceph-mon[53367]: osdmap e329: 8 total, 8 up, 8 in 2026-03-10T13:43:51.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-91051-63"}]: dispatch 2026-03-10T13:43:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:51 vm05 ceph-mon[58955]: pgmap v444: 300 pgs: 1 creating+peering, 7 unknown, 292 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:43:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:51 vm05 ceph-mon[58955]: osdmap e329: 8 total, 8 up, 8 in 2026-03-10T13:43:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-91051-63"}]: dispatch 2026-03-10T13:43:52.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:51 vm05 ceph-mon[51512]: pgmap v444: 300 pgs: 1 creating+peering, 7 unknown, 292 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:43:52.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:51 vm05 ceph-mon[51512]: osdmap e329: 8 total, 8 up, 8 in 2026-03-10T13:43:52.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:51 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-91051-63"}]: dispatch 2026-03-10T13:43:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-91051-63"}]': finished 2026-03-10T13:43:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:52 vm09 ceph-mon[53367]: osdmap e330: 8 total, 8 up, 8 in 2026-03-10T13:43:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-91051-63"}]: dispatch 2026-03-10T13:43:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-91051-63"}]': finished 2026-03-10T13:43:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:52 vm09 ceph-mon[53367]: osdmap e331: 8 total, 8 up, 8 in 2026-03-10T13:43:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-91051-63"}]': finished 2026-03-10T13:43:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:52 vm05 ceph-mon[58955]: osdmap e330: 8 total, 8 up, 8 in 2026-03-10T13:43:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-91051-63"}]: dispatch 2026-03-10T13:43:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-91051-63"}]': finished 2026-03-10T13:43:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:52 vm05 ceph-mon[58955]: osdmap e331: 8 total, 8 up, 8 in 2026-03-10T13:43:53.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:52 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm05-91051-63"}]': finished 2026-03-10T13:43:53.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:52 vm05 ceph-mon[51512]: osdmap e330: 8 total, 8 up, 8 in 2026-03-10T13:43:53.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:52 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/602201989' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-91051-63"}]: dispatch 2026-03-10T13:43:53.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:52 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/602201989' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm05-91051-63"}]': finished 2026-03-10T13:43:53.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:52 vm05 ceph-mon[51512]: osdmap e331: 8 total, 8 up, 8 in 2026-03-10T13:43:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:53 vm09 ceph-mon[53367]: pgmap v447: 292 pgs: 292 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:43:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-91051-64"}]: dispatch 2026-03-10T13:43:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-91051-64"}]: dispatch 2026-03-10T13:43:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm05-91051-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:43:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:53 vm05 ceph-mon[58955]: pgmap v447: 292 pgs: 292 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:43:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-91051-64"}]: dispatch 2026-03-10T13:43:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-91051-64"}]: dispatch 2026-03-10T13:43:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm05-91051-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:43:54.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:53 vm05 ceph-mon[51512]: pgmap v447: 292 pgs: 292 active+clean; 8.3 MiB data, 736 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:43:54.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:53 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-91051-64"}]: dispatch 2026-03-10T13:43:54.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-91051-64"}]: dispatch 2026-03-10T13:43:54.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm05-91051-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:43:54.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:43:54.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:54 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm05-91051-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:43:54.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:54 vm09 ceph-mon[53367]: osdmap e332: 8 total, 8 up, 8 in 2026-03-10T13:43:54.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:54 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm05-91051-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm05-91051-64"}]: dispatch 2026-03-10T13:43:55.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:54 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm05-91051-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:43:55.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:54 vm05 ceph-mon[58955]: osdmap e332: 8 total, 8 up, 8 in 2026-03-10T13:43:55.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:54 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm05-91051-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm05-91051-64"}]: dispatch 2026-03-10T13:43:55.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:54 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm05-91051-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:43:55.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:54 vm05 ceph-mon[51512]: osdmap e332: 8 total, 8 up, 8 in 2026-03-10T13:43:55.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:54 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm05-91051-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm05-91051-64"}]: dispatch 2026-03-10T13:43:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:55 vm09 ceph-mon[53367]: pgmap v450: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T13:43:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:55 vm09 ceph-mon[53367]: osdmap e333: 8 total, 8 up, 8 in 2026-03-10T13:43:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm05-91051-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm05-91051-64"}]': finished 2026-03-10T13:43:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:55 vm09 ceph-mon[53367]: osdmap e334: 8 total, 8 up, 8 in 2026-03-10T13:43:56.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:55 vm05 ceph-mon[58955]: pgmap v450: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T13:43:56.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:55 vm05 ceph-mon[58955]: osdmap e333: 8 total, 8 up, 8 in 2026-03-10T13:43:56.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm05-91051-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm05-91051-64"}]': finished 2026-03-10T13:43:56.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:55 vm05 ceph-mon[58955]: osdmap e334: 8 total, 8 up, 8 in 2026-03-10T13:43:56.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:55 vm05 ceph-mon[51512]: pgmap v450: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T13:43:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:55 vm05 ceph-mon[51512]: osdmap e333: 8 total, 8 up, 8 in 2026-03-10T13:43:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:55 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2583084598' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm05-91051-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm05-91051-64"}]': finished 2026-03-10T13:43:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:55 vm05 ceph-mon[51512]: osdmap e334: 8 total, 8 up, 8 in 2026-03-10T13:43:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:56 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:56 vm05 ceph-mon[58955]: osdmap e335: 8 total, 8 up, 8 in 2026-03-10T13:43:57.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:56 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:57.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:56 vm05 ceph-mon[51512]: osdmap e335: 8 total, 8 up, 8 in 2026-03-10T13:43:57.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:56 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:43:57.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:56 vm09 ceph-mon[53367]: osdmap e335: 8 total, 8 up, 8 in 2026-03-10T13:43:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:57 vm05 ceph-mon[58955]: pgmap v453: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T13:43:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:57 vm05 ceph-mon[58955]: osdmap e336: 8 total, 8 up, 8 in 2026-03-10T13:43:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-91051-64"}]: dispatch 2026-03-10T13:43:58.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:57 vm05 ceph-mon[51512]: pgmap v453: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T13:43:58.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:57 vm05 ceph-mon[51512]: osdmap e336: 8 total, 8 up, 8 in 2026-03-10T13:43:58.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-91051-64"}]: dispatch 2026-03-10T13:43:58.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:57 vm09 ceph-mon[53367]: pgmap v453: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-10T13:43:58.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:57 vm09 ceph-mon[53367]: osdmap e336: 8 total, 8 up, 8 in 2026-03-10T13:43:58.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:57 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-91051-64"}]: dispatch 2026-03-10T13:43:58.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:43:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:44:00.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:59 vm05 ceph-mon[58955]: pgmap v456: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:44:00.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:00.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-91051-64"}]': finished 2026-03-10T13:44:00.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:59 vm05 ceph-mon[58955]: osdmap e337: 8 total, 8 up, 8 in 2026-03-10T13:44:00.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:43:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-91051-64"}]: dispatch 2026-03-10T13:44:00.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:59 vm05 ceph-mon[51512]: pgmap v456: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:44:00.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:00.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-91051-64"}]': finished 2026-03-10T13:44:00.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:59 vm05 ceph-mon[51512]: osdmap e337: 8 total, 8 up, 8 in 2026-03-10T13:44:00.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:43:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-91051-64"}]: dispatch 2026-03-10T13:44:00.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:43:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:43:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:44:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:59 vm09 ceph-mon[53367]: pgmap v456: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:44:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:59 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2583084598' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm05-91051-64"}]': finished 2026-03-10T13:44:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:59 vm09 ceph-mon[53367]: osdmap e337: 8 total, 8 up, 8 in 2026-03-10T13:44:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:43:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-91051-64"}]: dispatch 2026-03-10T13:44:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-91051-64"}]': finished 2026-03-10T13:44:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:00 vm05 ceph-mon[58955]: osdmap e338: 8 total, 8 up, 8 in 2026-03-10T13:44:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-91051-65"}]: dispatch 2026-03-10T13:44:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-91051-65"}]: dispatch 2026-03-10T13:44:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm05-91051-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:44:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-91051-64"}]': finished 2026-03-10T13:44:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:00 vm05 ceph-mon[51512]: osdmap e338: 8 total, 8 up, 8 in 2026-03-10T13:44:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-91051-65"}]: dispatch 2026-03-10T13:44:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-91051-65"}]: dispatch 2026-03-10T13:44:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm05-91051-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:44:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:00 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2583084598' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm05-91051-64"}]': finished 2026-03-10T13:44:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:00 vm09 ceph-mon[53367]: osdmap e338: 8 total, 8 up, 8 in 2026-03-10T13:44:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-91051-65"}]: dispatch 2026-03-10T13:44:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-91051-65"}]: dispatch 2026-03-10T13:44:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm05-91051-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:44:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:01 vm05 ceph-mon[58955]: pgmap v459: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:44:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm05-91051-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:44:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:44:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:01 vm05 ceph-mon[58955]: osdmap e339: 8 total, 8 up, 8 in 2026-03-10T13:44:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm05-91051-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm05-91051-65"}]: dispatch 2026-03-10T13:44:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:01 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-49"}]: dispatch 2026-03-10T13:44:02.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:01 vm05 ceph-mon[51512]: pgmap v459: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:44:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm05-91051-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:44:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:44:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:01 vm05 ceph-mon[51512]: osdmap e339: 8 total, 8 up, 8 in 2026-03-10T13:44:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm05-91051-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm05-91051-65"}]: dispatch 2026-03-10T13:44:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-49"}]: dispatch 2026-03-10T13:44:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:01 vm09 ceph-mon[53367]: pgmap v459: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:44:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm05-91051-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:44:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:44:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:01 vm09 ceph-mon[53367]: osdmap e339: 8 total, 8 up, 8 in 2026-03-10T13:44:02.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm05-91051-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm05-91051-65"}]: dispatch 2026-03-10T13:44:02.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:01 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-49"}]: dispatch 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: 163:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:5d165639:::164:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f43765fc:::165:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b4c720e9:::166:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:e694b040:::167:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:afa38db2:::168:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:77ba9f53:::169:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:87495034:::170:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:7c96bf0e:::171:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:dbe346cc:::172:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:e943ec24:::173:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f97a9c0c:::174:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:6f26e74d:::175:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:4f95e106:::176:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:0e6f2f8f:::177:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:05db05f1:::178:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:38a78d66:::179:head 2026-03-10T13:44:02.804 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:d095610b:::180:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:a1a9d709:::181:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:1e5d39db:::182:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:f7df4fb9:::183:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:03a7f161:::184:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:ba70721e:::185:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:28e5662d:::186:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:973d52de:::187:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:4303eb1c:::188:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:b990b48e:::189:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:29b8165b:::190:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:3547f197:::191:head 2026-03-10T13:44:02.805 
INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:7e260936:::192:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:1abec7b1:::193:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:10fdda93:::194:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:15817eea:::195:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:770bab57:::196:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:ed9e13e7:::197:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:71471a8f:::198:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: checking for 268:10fb1d02:::199:head 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetWrite (9160 ms) 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetTrim 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150198,0 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: first is 1773150198 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150198,0 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150198,0 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150198,1773150200,1773150201,0 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150198,1773150200,1773150201,0 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150198,1773150200,1773150201,0 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150198,1773150200,1773150201,1773150203,1773150204,0 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150198,1773150200,1773150201,1773150203,1773150204,0 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150198,1773150200,1773150201,1773150203,1773150204,0 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150198,1773150200,1773150201,1773150203,1773150204,1773150206,1773150207,0 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150198,1773150200,1773150201,1773150203,1773150204,1773150206,1773150207,0 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150198,1773150200,1773150201,1773150203,1773150204,1773150206,1773150207,0 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150201,1773150203,1773150204,1773150206,1773150207,1773150209,1773150210,0 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: first now 1773150201, trimmed 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetTrim (20291 ms) 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteOn2ndRead 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: foo0 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: verifying foo0 is eventually 
promoted 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteOn2ndRead (14264 ms) 2026-03-10T13:44:02.805 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ProxyRead 2026-03-10T13:44:03.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:02 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:44:03.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:02 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-49"}]': finished 2026-03-10T13:44:03.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:02 vm09 ceph-mon[53367]: osdmap e340: 8 total, 8 up, 8 in 2026-03-10T13:44:03.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:02 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:03.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:02 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-49"}]: dispatch 2026-03-10T13:44:03.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:02 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm05-91051-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm05-91051-65"}]': finished 2026-03-10T13:44:03.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:02 vm09 ceph-mon[53367]: osdmap e341: 8 total, 8 up, 8 in 2026-03-10T13:44:03.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:02 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:44:03.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:02 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-49"}]': finished 2026-03-10T13:44:03.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:02 vm05 ceph-mon[58955]: osdmap e340: 8 total, 8 up, 8 in 2026-03-10T13:44:03.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:02 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:03.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:02 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-49"}]: dispatch 2026-03-10T13:44:03.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:02 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2151469907' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm05-91051-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm05-91051-65"}]': finished 2026-03-10T13:44:03.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:02 vm05 ceph-mon[58955]: osdmap e341: 8 total, 8 up, 8 in 2026-03-10T13:44:03.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:02 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:44:03.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:02 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-49"}]': finished 2026-03-10T13:44:03.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:02 vm05 ceph-mon[51512]: osdmap e340: 8 total, 8 up, 8 in 2026-03-10T13:44:03.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:02 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:03.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:02 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-49"}]: dispatch 2026-03-10T13:44:03.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:02 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm05-91051-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm05-91051-65"}]': finished 2026-03-10T13:44:03.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:02 vm05 ceph-mon[51512]: osdmap e341: 8 total, 8 up, 8 in 2026-03-10T13:44:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:03 vm09 ceph-mon[53367]: pgmap v462: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:44:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:03 vm09 ceph-mon[53367]: osdmap e342: 8 total, 8 up, 8 in 2026-03-10T13:44:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:03 vm05 ceph-mon[58955]: pgmap v462: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:44:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:03 vm05 ceph-mon[58955]: osdmap e342: 8 total, 8 up, 8 in 2026-03-10T13:44:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:03 vm05 ceph-mon[51512]: pgmap v462: 292 pgs: 292 active+clean; 8.3 MiB data, 738 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:44:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:03 vm05 ceph-mon[51512]: osdmap e342: 8 total, 8 up, 8 in 2026-03-10T13:44:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:04 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:04 vm09 ceph-mon[53367]: osdmap e343: 8 total, 8 up, 8 in 2026-03-10T13:44:05.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-91051-65"}]: dispatch 2026-03-10T13:44:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:04 vm05 ceph-mon[58955]: osdmap e343: 8 total, 8 up, 8 in 2026-03-10T13:44:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-91051-65"}]: dispatch 2026-03-10T13:44:05.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:05.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:04 vm05 ceph-mon[51512]: osdmap e343: 8 total, 8 up, 8 in 2026-03-10T13:44:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-91051-65"}]: dispatch 2026-03-10T13:44:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:05 vm09 ceph-mon[53367]: pgmap v465: 300 pgs: 32 unknown, 8 creating+peering, 260 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:44:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:05 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:06.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:05 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:44:06.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:05 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-91051-65"}]': finished 2026-03-10T13:44:06.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:05 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:44:06.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:05 vm09 ceph-mon[53367]: osdmap e344: 8 total, 8 up, 8 in 2026-03-10T13:44:06.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:05 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-91051-65"}]: dispatch 2026-03-10T13:44:06.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:05 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-51"}]: dispatch 2026-03-10T13:44:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[58955]: pgmap v465: 300 pgs: 32 unknown, 8 creating+peering, 260 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:44:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:44:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-91051-65"}]': finished 2026-03-10T13:44:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:44:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[58955]: osdmap e344: 8 total, 8 up, 8 in 2026-03-10T13:44:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-91051-65"}]: dispatch 2026-03-10T13:44:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-51"}]: dispatch 2026-03-10T13:44:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[51512]: pgmap v465: 300 pgs: 32 unknown, 8 creating+peering, 260 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:44:06.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:06.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-51", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:44:06.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm05-91051-65"}]': finished 2026-03-10T13:44:06.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-51", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:44:06.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[51512]: osdmap e344: 8 total, 8 up, 8 in 2026-03-10T13:44:06.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-91051-65"}]: dispatch 2026-03-10T13:44:06.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-51"}]: dispatch 2026-03-10T13:44:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:07 vm09 ceph-mon[53367]: pgmap v468: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:44:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-91051-65"}]': finished 2026-03-10T13:44:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-51"}]': finished 2026-03-10T13:44:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:07 vm09 ceph-mon[53367]: osdmap e345: 8 total, 8 up, 8 in 2026-03-10T13:44:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:07 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-51", "mode": "writeback"}]: dispatch 2026-03-10T13:44:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-91051-66"}]: dispatch 2026-03-10T13:44:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-91051-66"}]: dispatch 2026-03-10T13:44:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm05-91051-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:44:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:07 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:44:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:07 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:44:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[58955]: pgmap v468: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:44:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-91051-65"}]': finished 2026-03-10T13:44:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-51"}]': finished 2026-03-10T13:44:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[58955]: osdmap e345: 8 total, 8 up, 8 in 2026-03-10T13:44:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-51", "mode": "writeback"}]: dispatch 2026-03-10T13:44:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-91051-66"}]: dispatch 2026-03-10T13:44:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-91051-66"}]: dispatch 2026-03-10T13:44:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm05-91051-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:44:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:44:08.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:44:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[51512]: pgmap v468: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:44:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2151469907' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm05-91051-65"}]': finished 2026-03-10T13:44:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-51"}]': finished 2026-03-10T13:44:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[51512]: osdmap e345: 8 total, 8 up, 8 in 2026-03-10T13:44:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-51", "mode": "writeback"}]: dispatch 2026-03-10T13:44:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-91051-66"}]: dispatch 2026-03-10T13:44:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-91051-66"}]: dispatch 2026-03-10T13:44:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm05-91051-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:44:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:44:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:07 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:44:08.859 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:44:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:44:09.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:08 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:44:09.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:08 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-51", "mode": "writeback"}]': finished 2026-03-10T13:44:09.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm05-91051-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:44:09.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:08 vm09 ceph-mon[53367]: osdmap e346: 8 total, 8 up, 8 in 2026-03-10T13:44:09.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm05-91051-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm05-91051-66"}]: dispatch 2026-03-10T13:44:09.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:44:09.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_count","val": "2"}]': finished 2026-03-10T13:44:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:08 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:44:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-51", "mode": "writeback"}]': finished 2026-03-10T13:44:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm05-91051-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:44:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:08 vm05 ceph-mon[58955]: osdmap e346: 8 total, 8 up, 8 in 2026-03-10T13:44:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm05-91051-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm05-91051-66"}]: dispatch 2026-03-10T13:44:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:44:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:08 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_count","val": "2"}]': finished 2026-03-10T13:44:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:08 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:44:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-51", "mode": "writeback"}]': finished 2026-03-10T13:44:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm05-91051-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:44:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:08 vm05 ceph-mon[51512]: osdmap e346: 8 total, 8 up, 8 in 2026-03-10T13:44:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm05-91051-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm05-91051-66"}]: dispatch 2026-03-10T13:44:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:44:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_count","val": "2"}]': finished 2026-03-10T13:44:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:09 vm09 ceph-mon[53367]: pgmap v471: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:44:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:09 vm09 ceph-mon[53367]: osdmap e347: 8 total, 8 up, 8 in 2026-03-10T13:44:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:44:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm05-91051-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm05-91051-66"}]': finished 2026-03-10T13:44:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:09 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:44:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:09 vm09 ceph-mon[53367]: osdmap e348: 8 total, 8 up, 8 in 2026-03-10T13:44:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:44:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[58955]: pgmap v471: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:44:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[58955]: osdmap e347: 8 total, 8 up, 8 in 2026-03-10T13:44:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:44:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm05-91051-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm05-91051-66"}]': finished 2026-03-10T13:44:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:44:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[58955]: osdmap e348: 8 total, 8 up, 8 in 2026-03-10T13:44:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:44:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[51512]: pgmap v471: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 739 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:44:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[51512]: osdmap e347: 8 total, 8 up, 8 in 2026-03-10T13:44:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:44:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/387972138' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm05-91051-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm05-91051-66"}]': finished 2026-03-10T13:44:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:44:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[51512]: osdmap e348: 8 total, 8 up, 8 in 2026-03-10T13:44:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:44:10.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:44:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:44:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:44:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:10 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:44:11.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T13:44:11.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:10 vm09 ceph-mon[53367]: osdmap e349: 8 total, 8 up, 8 in 2026-03-10T13:44:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:10 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:44:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T13:44:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:10 vm05 ceph-mon[58955]: osdmap e349: 8 total, 8 up, 8 in 2026-03-10T13:44:11.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:10 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:44:11.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:10 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T13:44:11.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:10 vm05 ceph-mon[51512]: osdmap e349: 8 total, 8 up, 8 in 2026-03-10T13:44:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:11 vm09 ceph-mon[53367]: pgmap v474: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T13:44:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:11 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T13:44:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T13:44:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:11 vm09 ceph-mon[53367]: osdmap e350: 8 total, 8 up, 8 in 2026-03-10T13:44:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T13:44:12.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-91051-66"}]: dispatch 2026-03-10T13:44:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:11 vm05 ceph-mon[58955]: pgmap v474: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T13:44:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:11 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T13:44:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T13:44:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:11 vm05 ceph-mon[58955]: osdmap e350: 8 total, 8 up, 8 in 2026-03-10T13:44:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:11 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T13:44:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-91051-66"}]: dispatch 2026-03-10T13:44:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:11 vm05 ceph-mon[51512]: pgmap v474: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T13:44:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:11 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T13:44:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T13:44:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:11 vm05 ceph-mon[51512]: osdmap e350: 8 total, 8 up, 8 in 2026-03-10T13:44:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T13:44:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:11 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-91051-66"}]: dispatch 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ api_aio_pp: [ OK ] LibRadosAioEC.RoundTripWriteFullPP2 (3041 ms) 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleStatPP 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleStatPP (7203 ms) 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleStatPPNS 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleStatPPNS (7054 ms) 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.StatRemovePP 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.StatRemovePP (7042 ms) 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.ExecuteClassPP 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.ExecuteClassPP (7165 ms) 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.OmapPP 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.OmapPP (7090 ms) 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.MultiWritePP 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ OK ] LibRadosAioEC.MultiWritePP (7037 ms) 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] 20 tests from LibRadosAioEC (140719 ms total) 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [----------] Global test environment tear-down 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [==========] 57 tests from 4 test suites ran. (295522 ms total) 2026-03-10T13:44:13.877 INFO:tasks.workunit.client.0.vm05.stdout: api_aio_pp: [ PASSED ] 57 tests. 2026-03-10T13:44:14.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:13 vm09 ceph-mon[53367]: pgmap v477: 292 pgs: 292 active+clean; 8.3 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T13:44:14.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:13 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "target_max_objects","val": "1"}]': finished 2026-03-10T13:44:14.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:13 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-91051-66"}]': finished 2026-03-10T13:44:14.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:13 vm09 ceph-mon[53367]: osdmap e351: 8 total, 8 up, 8 in 2026-03-10T13:44:14.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:13 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-91051-66"}]: dispatch 2026-03-10T13:44:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:13 vm05 ceph-mon[58955]: pgmap v477: 292 pgs: 292 active+clean; 8.3 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T13:44:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:13 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "target_max_objects","val": "1"}]': finished 2026-03-10T13:44:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:13 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-91051-66"}]': finished 2026-03-10T13:44:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:13 vm05 ceph-mon[58955]: osdmap e351: 8 total, 8 up, 8 in 2026-03-10T13:44:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:13 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-91051-66"}]: dispatch 2026-03-10T13:44:14.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:13 vm05 ceph-mon[51512]: pgmap v477: 292 pgs: 292 active+clean; 8.3 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T13:44:14.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:13 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-51","var": "target_max_objects","val": "1"}]': finished 2026-03-10T13:44:14.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:13 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm05-91051-66"}]': finished 2026-03-10T13:44:14.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:13 vm05 ceph-mon[51512]: osdmap e351: 8 total, 8 up, 8 in 2026-03-10T13:44:14.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:13 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-91051-66"}]: dispatch 2026-03-10T13:44:15.233 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-91051-66"}]': finished 2026-03-10T13:44:15.233 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:14 vm05 ceph-mon[58955]: osdmap e352: 8 total, 8 up, 8 in 2026-03-10T13:44:15.233 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/387972138' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-91051-66"}]': finished 2026-03-10T13:44:15.233 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:14 vm05 ceph-mon[51512]: osdmap e352: 8 total, 8 up, 8 in 2026-03-10T13:44:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:14 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/387972138' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm05-91051-66"}]': finished 2026-03-10T13:44:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:14 vm09 ceph-mon[53367]: osdmap e352: 8 total, 8 up, 8 in 2026-03-10T13:44:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:15 vm05 ceph-mon[58955]: pgmap v480: 292 pgs: 292 active+clean; 8.3 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:44:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:15 vm05 ceph-mon[58955]: Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T13:44:16.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:15 vm05 ceph-mon[51512]: pgmap v480: 292 pgs: 292 active+clean; 8.3 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:44:16.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:15 vm05 ceph-mon[51512]: Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T13:44:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:15 vm09 ceph-mon[53367]: pgmap v480: 292 pgs: 292 active+clean; 8.3 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:44:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:15 vm09 ceph-mon[53367]: Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T13:44:17.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:16 vm05 ceph-mon[58955]: pgmap v481: 292 pgs: 292 active+clean; 8.3 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 983 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:44:17.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:16 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:17.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:16 vm05 ceph-mon[51512]: pgmap v481: 292 pgs: 292 active+clean; 8.3 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 983 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:44:17.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:16 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:16 vm09 ceph-mon[53367]: pgmap v481: 292 pgs: 292 active+clean; 8.3 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 983 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:44:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:16 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:18.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:44:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:44:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:19 vm09 ceph-mon[53367]: pgmap v482: 292 pgs: 292 active+clean; 8.3 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 825 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:44:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:19.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:19 vm05 ceph-mon[51512]: pgmap v482: 292 pgs: 292 active+clean; 8.3 MiB data, 
740 MiB used, 159 GiB / 160 GiB avail; 825 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:44:19.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:19 vm05 ceph-mon[58955]: pgmap v482: 292 pgs: 292 active+clean; 8.3 MiB data, 740 MiB used, 159 GiB / 160 GiB avail; 825 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:44:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:44:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:44:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:44:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:21 vm09 ceph-mon[53367]: pgmap v483: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:44:21.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:21 vm05 ceph-mon[58955]: pgmap v483: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:44:21.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:21 vm05 ceph-mon[51512]: pgmap v483: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:44:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:23 vm05 ceph-mon[58955]: pgmap v484: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:44:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:23 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:44:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:44:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:24.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:23 vm05 ceph-mon[51512]: pgmap v484: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:44:24.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:23 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:44:24.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:44:24.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:23 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:23 vm09 ceph-mon[53367]: pgmap v484: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:44:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:23 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:44:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:44:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:25.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:44:25.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:24 vm05 ceph-mon[58955]: osdmap e353: 8 total, 8 up, 8 in 2026-03-10T13:44:25.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-51"}]: dispatch 2026-03-10T13:44:25.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-51"}]': finished 2026-03-10T13:44:25.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:24 vm05 ceph-mon[58955]: osdmap e354: 8 total, 8 up, 8 in 2026-03-10T13:44:25.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:44:25.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:24 vm05 ceph-mon[51512]: osdmap e353: 8 total, 8 up, 8 in 2026-03-10T13:44:25.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-51"}]: dispatch 2026-03-10T13:44:25.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-51"}]': finished 2026-03-10T13:44:25.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:24 vm05 ceph-mon[51512]: osdmap e354: 8 total, 8 up, 8 in 2026-03-10T13:44:25.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:24 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:44:25.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:24 vm09 ceph-mon[53367]: osdmap e353: 8 total, 8 up, 8 in 2026-03-10T13:44:25.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-51"}]: dispatch 2026-03-10T13:44:25.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-51"}]': finished 2026-03-10T13:44:25.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:24 vm09 ceph-mon[53367]: osdmap e354: 8 total, 8 up, 8 in 2026-03-10T13:44:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:25 vm05 ceph-mon[58955]: pgmap v486: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T13:44:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-51"}]: dispatch 2026-03-10T13:44:26.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:25 vm05 ceph-mon[51512]: pgmap v486: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T13:44:26.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:26.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-51"}]: dispatch 2026-03-10T13:44:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:25 vm09 ceph-mon[53367]: pgmap v486: 292 pgs: 292 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T13:44:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:25 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-51"}]: dispatch 2026-03-10T13:44:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:26 vm05 ceph-mon[58955]: osdmap e355: 8 total, 8 up, 8 in 2026-03-10T13:44:27.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:26 vm05 ceph-mon[51512]: osdmap e355: 8 total, 8 up, 8 in 2026-03-10T13:44:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:26 vm09 ceph-mon[53367]: osdmap e355: 8 total, 8 up, 8 in 2026-03-10T13:44:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:27 vm05 ceph-mon[58955]: pgmap v489: 260 pgs: 260 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T13:44:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:27 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T13:44:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:27 vm05 ceph-mon[58955]: osdmap e356: 8 total, 8 up, 8 in 2026-03-10T13:44:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:27 vm05 ceph-mon[51512]: pgmap v489: 260 pgs: 260 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T13:44:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:27 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T13:44:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:27 vm05 ceph-mon[51512]: osdmap e356: 8 total, 8 up, 8 in 2026-03-10T13:44:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:27 vm09 ceph-mon[53367]: pgmap v489: 260 pgs: 260 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail; 170 B/s rd, 0 op/s 2026-03-10T13:44:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:27 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T13:44:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:27 vm09 ceph-mon[53367]: osdmap e356: 8 total, 8 up, 8 in 2026-03-10T13:44:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-53","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:28.790 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:44:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:44:29.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:28 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:28 vm05 ceph-mon[58955]: osdmap e357: 8 total, 8 up, 8 in 2026-03-10T13:44:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:29.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-53"}]: dispatch 2026-03-10T13:44:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:28 vm05 ceph-mon[51512]: osdmap e357: 8 total, 8 up, 8 in 2026-03-10T13:44:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-53"}]: dispatch 2026-03-10T13:44:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-53","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:28 vm09 ceph-mon[53367]: osdmap e357: 8 total, 8 up, 8 in 2026-03-10T13:44:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:29.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:28 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-53"}]: dispatch 2026-03-10T13:44:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:44:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:44:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:44:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:29 vm05 ceph-mon[58955]: pgmap v492: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:44:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:29 vm05 ceph-mon[58955]: osdmap e358: 8 total, 8 up, 8 in 2026-03-10T13:44:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:29 vm05 ceph-mon[51512]: pgmap v492: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:44:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:29 vm05 ceph-mon[51512]: osdmap e358: 8 total, 8 up, 8 in 2026-03-10T13:44:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:29 vm09 ceph-mon[53367]: pgmap v492: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 758 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:44:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:29 vm09 ceph-mon[53367]: osdmap e358: 8 total, 8 up, 8 in 2026-03-10T13:44:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:30 vm05 ceph-mon[58955]: osdmap e359: 8 total, 8 up, 8 in 2026-03-10T13:44:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:30 vm05 ceph-mon[58955]: pgmap v495: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:44:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:30 vm05 ceph-mon[51512]: osdmap e359: 8 total, 8 up, 8 in 2026-03-10T13:44:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:30 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:30 vm05 ceph-mon[51512]: pgmap v495: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:44:31.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:31 vm09 ceph-mon[53367]: osdmap e359: 8 total, 8 up, 8 in 2026-03-10T13:44:31.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-55","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:31.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:31 vm09 ceph-mon[53367]: pgmap v495: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:44:32.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:32 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:32.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:32.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:32 vm05 ceph-mon[51512]: osdmap e360: 8 total, 8 up, 8 in 2026-03-10T13:44:32.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:44:32.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-55"}]: dispatch 2026-03-10T13:44:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:32 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:32 vm05 ceph-mon[58955]: osdmap e360: 8 total, 8 up, 8 in 2026-03-10T13:44:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:44:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:32 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-55"}]: dispatch 2026-03-10T13:44:32.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:32 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:32.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-55","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:32.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:32 vm09 ceph-mon[53367]: osdmap e360: 8 total, 8 up, 8 in 2026-03-10T13:44:32.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:44:32.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:32.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-55"}]: dispatch 2026-03-10T13:44:33.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:33 vm05 ceph-mon[58955]: osdmap e361: 8 total, 8 up, 8 in 2026-03-10T13:44:33.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:33 vm05 ceph-mon[58955]: pgmap v498: 260 pgs: 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:44:33.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:33 vm05 ceph-mon[51512]: osdmap e361: 8 total, 8 up, 8 in 2026-03-10T13:44:33.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:33 vm05 ceph-mon[51512]: pgmap v498: 260 pgs: 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:44:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:33 vm09 ceph-mon[53367]: osdmap e361: 8 total, 8 up, 8 in 2026-03-10T13:44:33.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:33 vm09 ceph-mon[53367]: pgmap v498: 260 pgs: 260 active+clean; 8.3 MiB data, 759 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:44:34.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:34 vm05 ceph-mon[58955]: osdmap e362: 8 total, 8 up, 8 in 2026-03-10T13:44:34.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:34.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:34 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:34.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:34 vm05 ceph-mon[58955]: osdmap e363: 8 total, 8 up, 8 in 2026-03-10T13:44:34.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:34 vm05 ceph-mon[51512]: osdmap e362: 8 total, 8 up, 8 in 2026-03-10T13:44:34.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:34.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:34.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:34 vm05 ceph-mon[51512]: osdmap e363: 8 total, 8 up, 8 in 2026-03-10T13:44:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:34 vm09 ceph-mon[53367]: osdmap e362: 8 total, 8 up, 8 in 2026-03-10T13:44:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:34 vm09 ceph-mon[53367]: osdmap e363: 8 total, 8 up, 8 in 2026-03-10T13:44:35.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:35.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-57"}]: dispatch 2026-03-10T13:44:35.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:35 vm05 ceph-mon[58955]: pgmap v501: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 838 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T13:44:35.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:35 vm05 ceph-mon[58955]: osdmap e364: 8 total, 8 up, 8 in 2026-03-10T13:44:35.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:35.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:35 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-57"}]: dispatch 2026-03-10T13:44:35.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:35 vm05 ceph-mon[51512]: pgmap v501: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 838 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T13:44:35.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:35 vm05 ceph-mon[51512]: osdmap e364: 8 total, 8 up, 8 in 2026-03-10T13:44:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-57"}]: dispatch 2026-03-10T13:44:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:35 vm09 ceph-mon[53367]: pgmap v501: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 838 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T13:44:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:35 vm09 ceph-mon[53367]: osdmap e364: 8 total, 8 up, 8 in 2026-03-10T13:44:37.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:37 vm05 ceph-mon[58955]: osdmap e365: 8 total, 8 up, 8 in 2026-03-10T13:44:37.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:37.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:37 vm05 ceph-mon[58955]: pgmap v504: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 838 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T13:44:37.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:37 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:37.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:37 vm05 ceph-mon[51512]: osdmap e365: 8 total, 8 up, 8 in 2026-03-10T13:44:37.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:37.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:37 vm05 ceph-mon[51512]: pgmap v504: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 838 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T13:44:37.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:37 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:37 vm09 ceph-mon[53367]: osdmap e365: 8 total, 8 up, 8 in 2026-03-10T13:44:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:37 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:37 vm09 ceph-mon[53367]: pgmap v504: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 838 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T13:44:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:37 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:38 vm09 ceph-mon[53367]: osdmap e366: 8 total, 8 up, 8 in 2026-03-10T13:44:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:44:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-59"}]: dispatch 2026-03-10T13:44:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:38 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:44:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:44:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:38 vm05 ceph-mon[58955]: osdmap e366: 8 total, 8 up, 8 in 2026-03-10T13:44:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:44:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:38 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-59"}]: dispatch 2026-03-10T13:44:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:38 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:44:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:44:38.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:38.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:38 vm05 ceph-mon[51512]: osdmap e366: 8 total, 8 up, 8 in 2026-03-10T13:44:38.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:44:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-59"}]: dispatch 2026-03-10T13:44:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:38 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:44:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:44:38.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:44:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:44:39.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:39 vm09 ceph-mon[53367]: pgmap v506: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 838 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:44:39.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:39 vm09 ceph-mon[53367]: osdmap e367: 8 total, 8 up, 8 in 2026-03-10T13:44:39.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:39.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:39 vm09 ceph-mon[53367]: osdmap e368: 8 total, 8 up, 8 in 2026-03-10T13:44:39.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:39 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:39 vm05 ceph-mon[58955]: pgmap v506: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 838 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:44:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:39 vm05 ceph-mon[58955]: osdmap e367: 8 total, 8 up, 8 in 2026-03-10T13:44:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:39 vm05 ceph-mon[58955]: osdmap e368: 8 total, 8 up, 8 in 2026-03-10T13:44:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:39 vm05 ceph-mon[51512]: pgmap v506: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 838 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:44:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:39 vm05 ceph-mon[51512]: osdmap e367: 8 total, 8 up, 8 in 2026-03-10T13:44:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:39 vm05 ceph-mon[51512]: osdmap e368: 8 total, 8 up, 8 in 2026-03-10T13:44:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:44:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:44:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[58955]: osdmap e369: 8 total, 8 up, 8 in 2026-03-10T13:44:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:44:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:44:40.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:44:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:44:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[51512]: osdmap e369: 8 total, 8 up, 8 in 2026-03-10T13:44:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:44:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:40 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:44:40.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:44:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:44:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:44:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:40 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:44:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:40 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:40 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:40 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:44:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:40 vm09 ceph-mon[53367]: osdmap e369: 8 total, 8 up, 8 in 2026-03-10T13:44:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:44:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:44:41.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:41 vm09 ceph-mon[53367]: pgmap v509: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 839 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T13:44:41.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:44:41.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:41 vm09 ceph-mon[53367]: osdmap e370: 8 total, 8 up, 8 in 2026-03-10T13:44:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:41 vm05 ceph-mon[58955]: pgmap v509: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 839 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T13:44:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:41 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:44:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:41 vm05 ceph-mon[58955]: osdmap e370: 8 total, 8 up, 8 in 2026-03-10T13:44:41.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:41 vm05 ceph-mon[51512]: pgmap v509: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 839 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T13:44:41.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:44:41.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:41 vm05 ceph-mon[51512]: osdmap e370: 8 total, 8 up, 8 in 2026-03-10T13:44:42.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:42.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-61"}]: dispatch 2026-03-10T13:44:42.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:42 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:42.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:42.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-61"}]: dispatch 2026-03-10T13:44:42.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:42 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:42.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:44:42.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:42 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-61"}]: dispatch 2026-03-10T13:44:42.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:42 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:43.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:43 vm09 ceph-mon[53367]: pgmap v512: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 839 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T13:44:43.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:43 vm09 ceph-mon[53367]: osdmap e371: 8 total, 8 up, 8 in 2026-03-10T13:44:43.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:43 vm05 ceph-mon[58955]: pgmap v512: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 839 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T13:44:43.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:43 vm05 ceph-mon[58955]: osdmap e371: 8 total, 8 up, 8 in 2026-03-10T13:44:43.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:43 vm05 ceph-mon[51512]: pgmap v512: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 839 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-10T13:44:43.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:43 vm05 ceph-mon[51512]: osdmap e371: 8 total, 8 up, 8 in 2026-03-10T13:44:44.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:44 vm09 ceph-mon[53367]: osdmap e372: 8 total, 8 up, 8 in 2026-03-10T13:44:44.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:44.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:44 vm05 ceph-mon[58955]: osdmap e372: 8 total, 8 up, 8 in 2026-03-10T13:44:44.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:44.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:44 vm05 ceph-mon[51512]: osdmap e372: 8 total, 8 up, 8 in 2026-03-10T13:44:44.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-63","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:44:45.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:45 vm05 ceph-mon[58955]: pgmap v515: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 839 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:44:45.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:45 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:45.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:45 vm05 ceph-mon[58955]: osdmap e373: 8 total, 8 up, 8 in 2026-03-10T13:44:45.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:44:45.483 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:44:45.483 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:45 vm05 ceph-mon[51512]: pgmap v515: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 839 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:44:45.483 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:45.483 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:45 vm05 ceph-mon[51512]: osdmap e373: 8 total, 8 up, 8 in 2026-03-10T13:44:45.483 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:44:45.483 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:44:45.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:45 vm09 ceph-mon[53367]: pgmap v515: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 839 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:44:45.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-63","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:44:45.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:45 vm09 ceph-mon[53367]: osdmap e373: 8 total, 8 up, 8 in 2026-03-10T13:44:45.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:44:45.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:44:46.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:46 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:44:46.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:46 vm05 ceph-mon[51512]: osdmap e374: 8 total, 8 up, 8 in 2026-03-10T13:44:46.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:44:46.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:46 vm05 ceph-mon[58955]: osdmap e374: 8 total, 8 up, 8 in 2026-03-10T13:44:46.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:44:46.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:46 vm09 ceph-mon[53367]: osdmap e374: 8 total, 8 up, 8 in 2026-03-10T13:44:47.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:47 vm05 ceph-mon[58955]: pgmap v518: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 839 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:44:47.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:47 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:47.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:47 vm05 ceph-mon[58955]: osdmap e375: 8 total, 8 up, 8 in 2026-03-10T13:44:47.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:47 vm05 ceph-mon[51512]: pgmap v518: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 839 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:44:47.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:47 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:47.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:47 vm05 ceph-mon[51512]: osdmap e375: 8 total, 8 up, 8 in 2026-03-10T13:44:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:47 vm09 ceph-mon[53367]: pgmap v518: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 839 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:44:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:47 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:44:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:47 vm09 ceph-mon[53367]: osdmap e375: 8 total, 8 up, 8 in 2026-03-10T13:44:48.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:48 vm05 ceph-mon[58955]: osdmap e376: 8 total, 8 up, 8 in 2026-03-10T13:44:48.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:48 vm05 ceph-mon[51512]: osdmap e376: 8 total, 8 up, 8 in 2026-03-10T13:44:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:48 vm09 ceph-mon[53367]: osdmap e376: 8 total, 8 up, 8 in 2026-03-10T13:44:48.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:44:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:44:49.581 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:49 vm05 ceph-mon[58955]: pgmap v521: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 839 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:44:49.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:49 vm05 ceph-mon[58955]: osdmap e377: 8 total, 8 up, 8 in 2026-03-10T13:44:49.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:49.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:49 vm05 ceph-mon[51512]: pgmap v521: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 839 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:44:49.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:49 vm05 ceph-mon[51512]: osdmap e377: 8 total, 8 up, 8 in 2026-03-10T13:44:49.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:49 vm09 ceph-mon[53367]: pgmap v521: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 839 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:44:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:49 vm09 ceph-mon[53367]: osdmap e377: 8 total, 8 up, 8 in 2026-03-10T13:44:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:50.306 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:44:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:44:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:44:50.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:50 vm05 ceph-mon[58955]: osdmap e378: 8 total, 8 up, 8 in 2026-03-10T13:44:50.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:50 vm05 ceph-mon[51512]: osdmap e378: 8 total, 8 up, 8 in 2026-03-10T13:44:50.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:50 vm09 ceph-mon[53367]: osdmap e378: 8 total, 8 up, 8 in 2026-03-10T13:44:51.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:51 vm05 ceph-mon[58955]: pgmap v524: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.7 KiB/s wr, 7 op/s 2026-03-10T13:44:51.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:51 vm05 ceph-mon[51512]: pgmap v524: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.7 KiB/s wr, 7 op/s 2026-03-10T13:44:51.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:51 vm09 ceph-mon[53367]: pgmap v524: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.7 KiB/s wr, 7 op/s 2026-03-10T13:44:53.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:53 vm09 ceph-mon[53367]: pgmap v525: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.9 KiB/s wr, 5 op/s 2026-03-10T13:44:53.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-10T13:44:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:53 vm05 ceph-mon[58955]: pgmap v525: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.9 KiB/s wr, 5 op/s 2026-03-10T13:44:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:44:53.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:53 vm05 ceph-mon[51512]: pgmap v525: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 1.9 KiB/s wr, 5 op/s 2026-03-10T13:44:53.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:44:55.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:55 vm09 ceph-mon[53367]: pgmap v526: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.6 KiB/s wr, 5 op/s 2026-03-10T13:44:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:55 vm05 ceph-mon[58955]: pgmap v526: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.6 KiB/s wr, 5 op/s 2026-03-10T13:44:55.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:55 vm05 ceph-mon[51512]: pgmap v526: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 1.6 KiB/s wr, 5 op/s 2026-03-10T13:44:57.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:57 vm09 ceph-mon[53367]: pgmap v527: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.4 KiB/s wr, 4 op/s 2026-03-10T13:44:57.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:57 vm05 ceph-mon[58955]: pgmap v527: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.4 KiB/s wr, 4 op/s 2026-03-10T13:44:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:57 vm05 ceph-mon[51512]: pgmap v527: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.4 KiB/s wr, 4 op/s 2026-03-10T13:44:58.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:44:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:44:59.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:59 vm09 ceph-mon[53367]: pgmap v528: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-10T13:44:59.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:44:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:59.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:59 vm05 ceph-mon[58955]: pgmap v528: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-10T13:44:59.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:44:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:44:59.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:59 vm05 ceph-mon[51512]: pgmap v528: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s 2026-03-10T13:44:59.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:44:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:44:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:44:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:45:00.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:00 vm09 ceph-mon[53367]: osdmap e379: 8 total, 8 up, 8 in 2026-03-10T13:45:00.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:00 vm05 ceph-mon[58955]: osdmap e379: 8 total, 8 up, 8 in 2026-03-10T13:45:00.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:00 vm05 ceph-mon[51512]: osdmap e379: 8 total, 8 up, 8 in 2026-03-10T13:45:01.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:01 vm09 ceph-mon[53367]: pgmap v530: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:45:01.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:01 vm09 ceph-mon[53367]: osdmap e380: 8 total, 8 up, 8 in 2026-03-10T13:45:01.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:01 vm05 ceph-mon[58955]: pgmap v530: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:45:01.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:01 vm05 ceph-mon[58955]: osdmap e380: 8 total, 8 up, 8 in 2026-03-10T13:45:01.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:01 vm05 ceph-mon[51512]: pgmap v530: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:45:01.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:01 vm05 ceph-mon[51512]: osdmap e380: 8 total, 8 up, 8 in 2026-03-10T13:45:03.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:03 vm09 ceph-mon[53367]: pgmap v532: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T13:45:03.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:03 vm05 ceph-mon[58955]: pgmap v532: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T13:45:03.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:03 vm05 ceph-mon[51512]: pgmap v532: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T13:45:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:05 vm09 ceph-mon[53367]: pgmap v533: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:45:05.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:05 vm05 ceph-mon[58955]: pgmap v533: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:45:05.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:05 vm05 ceph-mon[51512]: pgmap 
v533: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:45:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:07 vm09 ceph-mon[53367]: pgmap v534: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:45:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:07 vm05 ceph-mon[58955]: pgmap v534: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:45:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:07 vm05 ceph-mon[51512]: pgmap v534: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:45:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:45:08.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:45:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:45:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:45:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:45:09.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:09 vm09 ceph-mon[53367]: pgmap v535: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:45:09.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:09 vm05 ceph-mon[58955]: pgmap v535: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:45:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:09.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:09 vm05 ceph-mon[51512]: pgmap v535: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:45:09.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:45:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:45:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:45:10.673 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:10 vm09 ceph-mon[53367]: osdmap e381: 8 total, 8 up, 8 in 2026-03-10T13:45:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:10 vm05 ceph-mon[58955]: osdmap e381: 8 total, 8 up, 8 in 2026-03-10T13:45:10.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:10 vm05 ceph-mon[51512]: osdmap e381: 8 total, 8 up, 8 in 2026-03-10T13:45:11.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:11 vm09 ceph-mon[53367]: pgmap v537: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:45:11.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:11 vm09 ceph-mon[53367]: osdmap e382: 8 total, 8 up, 8 in 2026-03-10T13:45:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:11 vm05 ceph-mon[58955]: pgmap v537: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:45:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:11 vm05 ceph-mon[58955]: osdmap e382: 8 total, 8 up, 8 in 2026-03-10T13:45:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:11 vm05 ceph-mon[51512]: pgmap v537: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:45:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:11 vm05 ceph-mon[51512]: osdmap e382: 8 total, 8 up, 8 in 2026-03-10T13:45:13.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:13 vm09 ceph-mon[53367]: pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T13:45:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:13 vm05 ceph-mon[58955]: pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T13:45:13.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:13 vm05 ceph-mon[51512]: pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T13:45:15.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:15 vm09 ceph-mon[53367]: pgmap v540: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T13:45:15.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:15 vm05 ceph-mon[58955]: pgmap v540: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T13:45:15.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:15 vm05 ceph-mon[51512]: pgmap v540: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T13:45:17.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:17 vm05 ceph-mon[58955]: pgmap v541: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T13:45:17.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:17 vm05 ceph-mon[51512]: pgmap v541: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T13:45:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:17 vm09 ceph-mon[53367]: pgmap v541: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T13:45:18.924 
INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:45:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:45:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:19 vm05 ceph-mon[58955]: pgmap v542: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 706 B/s rd, 0 op/s 2026-03-10T13:45:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:45:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-63"}]: dispatch 2026-03-10T13:45:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:19 vm05 ceph-mon[51512]: pgmap v542: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 706 B/s rd, 0 op/s 2026-03-10T13:45:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:45:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-63"}]: dispatch 2026-03-10T13:45:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:19 vm09 ceph-mon[53367]: pgmap v542: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail; 706 B/s rd, 0 op/s 2026-03-10T13:45:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:45:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:19 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-63"}]: dispatch 2026-03-10T13:45:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:45:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:45:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:45:20.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:20 vm05 ceph-mon[58955]: osdmap e383: 8 total, 8 up, 8 in 2026-03-10T13:45:20.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:20 vm05 ceph-mon[51512]: osdmap e383: 8 total, 8 up, 8 in 2026-03-10T13:45:20.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:20 vm09 ceph-mon[53367]: osdmap e383: 8 total, 8 up, 8 in 2026-03-10T13:45:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:21 vm05 ceph-mon[58955]: pgmap v544: 260 pgs: 260 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:45:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:21 vm05 ceph-mon[58955]: osdmap e384: 8 total, 8 up, 8 in 2026-03-10T13:45:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:45:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:45:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:21 vm05 ceph-mon[58955]: osdmap e385: 8 total, 8 up, 8 in 2026-03-10T13:45:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:45:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:45:21.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:21 vm05 ceph-mon[51512]: pgmap v544: 260 pgs: 260 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:45:21.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:21 vm05 ceph-mon[51512]: osdmap e384: 8 total, 8 up, 8 in 2026-03-10T13:45:21.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:45:21.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:21 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:45:21.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:21 vm05 ceph-mon[51512]: osdmap e385: 8 total, 8 up, 8 in 2026-03-10T13:45:21.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:45:21.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:45:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:21 vm09 ceph-mon[53367]: pgmap v544: 260 pgs: 260 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:45:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:21 vm09 ceph-mon[53367]: osdmap e384: 8 total, 8 up, 8 in 2026-03-10T13:45:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:45:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:45:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:21 vm09 ceph-mon[53367]: osdmap e385: 8 total, 8 up, 8 in 2026-03-10T13:45:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:45:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:45:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:23 vm05 ceph-mon[58955]: pgmap v547: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:45:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:23 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:45:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:23 vm05 ceph-mon[58955]: osdmap e386: 8 total, 8 up, 8 in 2026-03-10T13:45:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:45:23.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:23 vm05 ceph-mon[51512]: pgmap v547: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:45:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:45:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:23 vm05 ceph-mon[51512]: osdmap e386: 8 total, 8 up, 8 in 2026-03-10T13:45:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:45:23.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:23 vm09 ceph-mon[53367]: pgmap v547: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 858 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:45:23.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:45:23.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:23 vm09 ceph-mon[53367]: osdmap e386: 8 total, 8 up, 8 in 2026-03-10T13:45:23.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:45:24.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:24 vm05 ceph-mon[58955]: osdmap e387: 8 total, 8 up, 8 in 2026-03-10T13:45:24.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:24 vm05 ceph-mon[51512]: osdmap e387: 8 total, 8 up, 8 in 2026-03-10T13:45:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:24 vm09 ceph-mon[53367]: osdmap e387: 8 total, 8 up, 8 in 2026-03-10T13:45:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:25 vm05 ceph-mon[58955]: pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 859 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 4 op/s 2026-03-10T13:45:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:25 vm05 ceph-mon[58955]: osdmap e388: 8 total, 8 up, 8 in 2026-03-10T13:45:25.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:25 vm05 ceph-mon[51512]: pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 859 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 4 op/s 2026-03-10T13:45:25.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:25 vm05 ceph-mon[51512]: osdmap e388: 8 total, 8 up, 8 in 2026-03-10T13:45:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:25 vm09 ceph-mon[53367]: pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB 
data, 859 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 4 op/s 2026-03-10T13:45:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:25 vm09 ceph-mon[53367]: osdmap e388: 8 total, 8 up, 8 in 2026-03-10T13:45:26.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:26 vm05 ceph-mon[58955]: osdmap e389: 8 total, 8 up, 8 in 2026-03-10T13:45:26.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:26 vm05 ceph-mon[51512]: osdmap e389: 8 total, 8 up, 8 in 2026-03-10T13:45:26.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:26 vm09 ceph-mon[53367]: osdmap e389: 8 total, 8 up, 8 in 2026-03-10T13:45:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:27 vm05 ceph-mon[58955]: pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 859 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 4 op/s 2026-03-10T13:45:27.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:27 vm05 ceph-mon[51512]: pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 859 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 4 op/s 2026-03-10T13:45:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:27 vm09 ceph-mon[53367]: pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 859 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 4 op/s 2026-03-10T13:45:28.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:45:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:45:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:29 vm05 ceph-mon[58955]: pgmap v554: 292 pgs: 292 active+clean; 8.3 MiB data, 859 MiB used, 159 GiB / 160 GiB avail; 866 B/s rd, 1.4 KiB/s wr, 3 op/s 2026-03-10T13:45:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:29 vm05 ceph-mon[51512]: pgmap v554: 292 pgs: 292 active+clean; 8.3 MiB data, 859 MiB used, 159 GiB / 160 GiB avail; 866 B/s rd, 1.4 KiB/s wr, 3 op/s 2026-03-10T13:45:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:29.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:29 vm09 ceph-mon[53367]: pgmap v554: 292 pgs: 292 active+clean; 8.3 MiB data, 859 MiB used, 159 GiB / 160 GiB avail; 866 B/s rd, 1.4 KiB/s wr, 3 op/s 2026-03-10T13:45:29.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:45:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:45:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:45:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:31 vm05 ceph-mon[58955]: pgmap v555: 292 pgs: 292 active+clean; 8.3 MiB data, 877 MiB used, 159 GiB / 160 GiB avail; 2.9 KiB/s rd, 1.4 KiB/s wr, 6 op/s 2026-03-10T13:45:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:31 vm05 ceph-mon[58955]: osdmap e390: 8 total, 8 up, 8 in 2026-03-10T13:45:31.581 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:31 vm05 ceph-mon[51512]: pgmap v555: 292 pgs: 292 active+clean; 8.3 MiB data, 877 MiB used, 159 GiB / 160 GiB avail; 2.9 KiB/s rd, 1.4 KiB/s wr, 6 op/s 2026-03-10T13:45:31.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:31 vm05 ceph-mon[51512]: osdmap e390: 8 total, 8 up, 8 in 2026-03-10T13:45:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:31 vm09 ceph-mon[53367]: pgmap v555: 292 pgs: 292 active+clean; 8.3 MiB data, 877 MiB used, 159 GiB / 160 GiB avail; 2.9 KiB/s rd, 1.4 KiB/s wr, 6 op/s 2026-03-10T13:45:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:31 vm09 ceph-mon[53367]: osdmap e390: 8 total, 8 up, 8 in 2026-03-10T13:45:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:33 vm05 ceph-mon[58955]: pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 877 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 259 B/s wr, 3 op/s 2026-03-10T13:45:33.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:33 vm05 ceph-mon[51512]: pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 877 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 259 B/s wr, 3 op/s 2026-03-10T13:45:33.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:33 vm09 ceph-mon[53367]: pgmap v557: 292 pgs: 292 active+clean; 8.3 MiB data, 877 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 259 B/s wr, 3 op/s 2026-03-10T13:45:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:35 vm05 ceph-mon[58955]: pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 230 B/s wr, 3 op/s 2026-03-10T13:45:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:45:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-65"}]: dispatch 2026-03-10T13:45:35.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:35 vm05 ceph-mon[51512]: pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 230 B/s wr, 3 op/s 2026-03-10T13:45:35.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:45:35.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-65"}]: dispatch 2026-03-10T13:45:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:35 vm09 ceph-mon[53367]: pgmap v558: 292 pgs: 292 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 2.3 KiB/s rd, 230 B/s wr, 3 op/s 2026-03-10T13:45:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:35 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:45:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-65"}]: dispatch 2026-03-10T13:45:37.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:37 vm05 ceph-mon[58955]: pgmap v559: 292 pgs: 292 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 204 B/s wr, 3 op/s 2026-03-10T13:45:37.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:37 vm05 ceph-mon[58955]: osdmap e391: 8 total, 8 up, 8 in 2026-03-10T13:45:37.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:37 vm05 ceph-mon[51512]: pgmap v559: 292 pgs: 292 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 204 B/s wr, 3 op/s 2026-03-10T13:45:37.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:37 vm05 ceph-mon[51512]: osdmap e391: 8 total, 8 up, 8 in 2026-03-10T13:45:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:37 vm09 ceph-mon[53367]: pgmap v559: 292 pgs: 292 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 204 B/s wr, 3 op/s 2026-03-10T13:45:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:37 vm09 ceph-mon[53367]: osdmap e391: 8 total, 8 up, 8 in 2026-03-10T13:45:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:38 vm05 ceph-mon[58955]: osdmap e392: 8 total, 8 up, 8 in 2026-03-10T13:45:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:45:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:45:38.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:38 vm05 ceph-mon[51512]: osdmap e392: 8 total, 8 up, 8 in 2026-03-10T13:45:38.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:45:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:45:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:38 vm09 ceph-mon[53367]: osdmap e392: 8 total, 8 up, 8 in 2026-03-10T13:45:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:38 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-67","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:45:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:45:38.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:45:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:45:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:39 vm05 ceph-mon[58955]: pgmap v562: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:45:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:39 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:45:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:45:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:39 vm05 ceph-mon[58955]: osdmap e393: 8 total, 8 up, 8 in 2026-03-10T13:45:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:45:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:39 vm05 ceph-mon[51512]: pgmap v562: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:45:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:39 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:45:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:45:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:39 vm05 ceph-mon[51512]: osdmap e393: 8 total, 8 up, 8 in 2026-03-10T13:45:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:39 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:45:39.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:39 vm09 ceph-mon[53367]: pgmap v562: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:45:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:39 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:45:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-67","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:45:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:39 vm09 ceph-mon[53367]: osdmap e393: 8 total, 8 up, 8 in 2026-03-10T13:45:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:45:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:40.283 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:45:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:45:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:45:40.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:40 vm05 ceph-mon[58955]: osdmap e394: 8 total, 8 up, 8 in 2026-03-10T13:45:40.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:45:40.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:40 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-67"}]: dispatch 2026-03-10T13:45:40.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:40 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:45:40.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:40 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:40.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:40 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:45:40.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:40 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:45:40.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:40 vm05 ceph-mon[51512]: osdmap e394: 8 total, 8 up, 8 in 2026-03-10T13:45:40.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:45:40.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-67"}]: dispatch 2026-03-10T13:45:40.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:40 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:45:40.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:40 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:40.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:40 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:45:40.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:40 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:45:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:40 vm09 ceph-mon[53367]: osdmap e394: 8 total, 8 up, 8 in 2026-03-10T13:45:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:45:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:40 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-67"}]: dispatch 2026-03-10T13:45:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:40 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:45:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:40 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:40 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:45:40.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:40 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:45:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:41 vm05 ceph-mon[58955]: pgmap v565: 292 pgs: 3 creating+activating, 17 creating+peering, 272 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:45:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:41 vm05 ceph-mon[58955]: osdmap e395: 8 total, 8 up, 8 in 2026-03-10T13:45:41.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:41 vm05 ceph-mon[51512]: pgmap v565: 292 pgs: 3 creating+activating, 17 creating+peering, 272 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:45:41.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:41 vm05 ceph-mon[51512]: osdmap e395: 8 total, 8 up, 8 in 2026-03-10T13:45:41.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:41 vm09 ceph-mon[53367]: pgmap v565: 292 pgs: 3 creating+activating, 17 creating+peering, 272 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:45:41.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:41 vm09 ceph-mon[53367]: osdmap e395: 8 total, 8 up, 8 in 2026-03-10T13:45:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:42 vm09 ceph-mon[53367]: osdmap e396: 8 total, 8 up, 8 in 2026-03-10T13:45:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:45:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:42 vm05 ceph-mon[58955]: osdmap e396: 8 total, 8 up, 8 in 2026-03-10T13:45:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:42 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:45:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:42 vm05 ceph-mon[51512]: osdmap e396: 8 total, 8 up, 8 in 2026-03-10T13:45:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:42 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-69","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:45:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:43 vm09 ceph-mon[53367]: pgmap v568: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:45:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:45:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:43 vm09 ceph-mon[53367]: osdmap e397: 8 total, 8 up, 8 in 2026-03-10T13:45:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:45:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:45:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-69"}]: dispatch 2026-03-10T13:45:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:43 vm09 ceph-mon[53367]: osdmap e398: 8 total, 8 up, 8 in 2026-03-10T13:45:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:43 vm05 ceph-mon[58955]: pgmap v568: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:45:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:45:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:43 vm05 ceph-mon[58955]: osdmap e397: 8 total, 8 up, 8 in 2026-03-10T13:45:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:45:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:45:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:43 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-69"}]: dispatch 2026-03-10T13:45:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:43 vm05 ceph-mon[58955]: osdmap e398: 8 total, 8 up, 8 in 2026-03-10T13:45:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:43 vm05 ceph-mon[51512]: pgmap v568: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:45:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-69","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:45:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:43 vm05 ceph-mon[51512]: osdmap e397: 8 total, 8 up, 8 in 2026-03-10T13:45:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:45:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:45:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-69"}]: dispatch 2026-03-10T13:45:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:43 vm05 ceph-mon[51512]: osdmap e398: 8 total, 8 up, 8 in 2026-03-10T13:45:45.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:45 vm05 ceph-mon[58955]: pgmap v571: 260 pgs: 260 active+clean; 8.3 MiB data, 882 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T13:45:45.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:45 vm05 ceph-mon[58955]: osdmap e399: 8 total, 8 up, 8 in 2026-03-10T13:45:45.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:45:45.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:45 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:45:45.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:45 vm05 ceph-mon[51512]: pgmap v571: 260 pgs: 260 active+clean; 8.3 MiB data, 882 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T13:45:45.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:45 vm05 ceph-mon[51512]: osdmap e399: 8 total, 8 up, 8 in 2026-03-10T13:45:45.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:45 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:45:45.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:45 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:45:45.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:45 vm09 ceph-mon[53367]: pgmap v571: 260 pgs: 260 active+clean; 8.3 MiB data, 882 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T13:45:45.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:45 vm09 ceph-mon[53367]: osdmap e399: 8 total, 8 up, 8 in 2026-03-10T13:45:45.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:45:45.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:45 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:45:46.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:45:46.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:46 vm09 ceph-mon[53367]: osdmap e400: 8 total, 8 up, 8 in 2026-03-10T13:45:46.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:45:46.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:45:46.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:45:46.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:46 vm09 ceph-mon[53367]: osdmap e401: 8 total, 8 up, 8 in 2026-03-10T13:45:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:45:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:46 vm05 ceph-mon[58955]: osdmap e400: 8 total, 8 up, 8 in 2026-03-10T13:45:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:45:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:46 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:45:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:45:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:46 vm05 ceph-mon[58955]: osdmap e401: 8 total, 8 up, 8 in 2026-03-10T13:45:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:45:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:46 vm05 ceph-mon[51512]: osdmap e400: 8 total, 8 up, 8 in 2026-03-10T13:45:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:45:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:45:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:45:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:46 vm05 ceph-mon[51512]: osdmap e401: 8 total, 8 up, 8 in 2026-03-10T13:45:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:47 vm09 ceph-mon[53367]: pgmap v574: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 882 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T13:45:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:47 vm05 ceph-mon[58955]: pgmap v574: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 882 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T13:45:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:47 vm05 ceph-mon[51512]: pgmap v574: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 882 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T13:45:48.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:48 vm09 ceph-mon[53367]: osdmap e402: 8 total, 8 up, 8 in 2026-03-10T13:45:48.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:45:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:45:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:48 vm05 ceph-mon[58955]: osdmap e402: 8 total, 8 up, 8 in 2026-03-10T13:45:48.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:48 vm05 ceph-mon[51512]: osdmap e402: 8 total, 8 up, 8 in 2026-03-10T13:45:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:49 vm09 ceph-mon[53367]: pgmap v577: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 882 MiB used, 159 GiB / 160 GiB avail 
2026-03-10T13:45:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:49 vm05 ceph-mon[58955]: pgmap v577: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 882 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:45:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:49.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:49 vm05 ceph-mon[51512]: pgmap v577: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 882 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:45:49.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:45:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:45:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:45:51.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:51 vm09 ceph-mon[53367]: pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-10T13:45:51.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:51 vm05 ceph-mon[58955]: pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-10T13:45:51.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:51 vm05 ceph-mon[51512]: pgmap v578: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-10T13:45:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:53 vm05 ceph-mon[58955]: pgmap v579: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 905 B/s rd, 1.9 KiB/s wr, 4 op/s 2026-03-10T13:45:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:45:53.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:53 vm05 ceph-mon[51512]: pgmap v579: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 905 B/s rd, 1.9 KiB/s wr, 4 op/s 2026-03-10T13:45:53.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:45:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:53 vm09 ceph-mon[53367]: pgmap v579: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 905 B/s rd, 1.9 KiB/s wr, 4 op/s 2026-03-10T13:45:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:45:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:55 vm05 ceph-mon[58955]: pgmap 
v580: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-10T13:45:55.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:55 vm05 ceph-mon[51512]: pgmap v580: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-10T13:45:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:55 vm09 ceph-mon[53367]: pgmap v580: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.6 KiB/s wr, 4 op/s 2026-03-10T13:45:57.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:57 vm05 ceph-mon[58955]: pgmap v581: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T13:45:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:57 vm05 ceph-mon[51512]: pgmap v581: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T13:45:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:57 vm09 ceph-mon[53367]: pgmap v581: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T13:45:58.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:45:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:45:59.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:59 vm05 ceph-mon[58955]: pgmap v582: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T13:45:59.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:45:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:59.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:59 vm05 ceph-mon[51512]: pgmap v582: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T13:45:59.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:45:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:45:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:59 vm09 ceph-mon[53367]: pgmap v582: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T13:45:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:45:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:46:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:45:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:45:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:46:01.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:01 vm05 ceph-mon[58955]: pgmap v583: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.3 KiB/s wr, 4 op/s 2026-03-10T13:46:01.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:01 vm05 ceph-mon[51512]: pgmap v583: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.7 
KiB/s rd, 1.3 KiB/s wr, 4 op/s 2026-03-10T13:46:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:01 vm09 ceph-mon[53367]: pgmap v583: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.3 KiB/s wr, 4 op/s 2026-03-10T13:46:03.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:03 vm05 ceph-mon[58955]: pgmap v584: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:46:03.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:03 vm05 ceph-mon[51512]: pgmap v584: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:46:03.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:03 vm09 ceph-mon[53367]: pgmap v584: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:46:05.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:05 vm05 ceph-mon[58955]: pgmap v585: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:46:05.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:05 vm05 ceph-mon[51512]: pgmap v585: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:46:05.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:05 vm09 ceph-mon[53367]: pgmap v585: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:46:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:07 vm05 ceph-mon[58955]: pgmap v586: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:46:07.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:46:07.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-71"}]: dispatch 2026-03-10T13:46:07.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:07 vm05 ceph-mon[51512]: pgmap v586: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:46:07.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:46:07.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-71"}]: dispatch 2026-03-10T13:46:07.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:07 vm09 ceph-mon[53367]: pgmap v586: 292 pgs: 292 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:46:07.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:07 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:46:07.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-71"}]: dispatch 2026-03-10T13:46:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:08 vm05 ceph-mon[58955]: osdmap e403: 8 total, 8 up, 8 in 2026-03-10T13:46:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:08 vm05 ceph-mon[58955]: osdmap e404: 8 total, 8 up, 8 in 2026-03-10T13:46:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:46:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:08 vm05 ceph-mon[51512]: osdmap e403: 8 total, 8 up, 8 in 2026-03-10T13:46:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:08 vm05 ceph-mon[51512]: osdmap e404: 8 total, 8 up, 8 in 2026-03-10T13:46:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:46:08.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:08 vm09 ceph-mon[53367]: osdmap e403: 8 total, 8 up, 8 in 2026-03-10T13:46:08.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:08.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:08 vm09 ceph-mon[53367]: osdmap e404: 8 total, 8 up, 8 in 2026-03-10T13:46:08.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:08 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:46:08.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:46:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:46:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:09 vm05 ceph-mon[58955]: pgmap v588: 260 pgs: 260 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-10T13:46:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:46:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:46:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:09 vm05 ceph-mon[58955]: osdmap e405: 8 total, 8 up, 8 in 2026-03-10T13:46:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:46:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:09 vm05 ceph-mon[51512]: pgmap v588: 260 pgs: 260 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-10T13:46:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:46:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:46:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:09 vm05 ceph-mon[51512]: osdmap e405: 8 total, 8 up, 8 in 2026-03-10T13:46:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:46:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:09 vm09 ceph-mon[53367]: pgmap v588: 260 pgs: 260 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-10T13:46:09.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:46:09.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:09 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:46:09.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:09 vm09 ceph-mon[53367]: osdmap e405: 8 total, 8 up, 8 in 2026-03-10T13:46:09.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:46:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:46:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:46:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:46:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:11 vm05 ceph-mon[58955]: pgmap v591: 292 pgs: 15 creating+peering, 17 unknown, 260 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:46:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:11 vm05 ceph-mon[58955]: osdmap e406: 8 total, 8 up, 8 in 2026-03-10T13:46:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:11 vm05 ceph-mon[51512]: pgmap v591: 292 pgs: 15 creating+peering, 17 unknown, 260 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:46:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:11 vm05 ceph-mon[51512]: osdmap e406: 8 total, 8 up, 8 in 2026-03-10T13:46:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:11 vm09 ceph-mon[53367]: pgmap v591: 292 pgs: 15 creating+peering, 17 unknown, 260 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:46:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:11 vm09 ceph-mon[53367]: osdmap e406: 8 total, 8 up, 8 in 2026-03-10T13:46:12.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:12 vm09 ceph-mon[53367]: osdmap e407: 8 total, 8 up, 8 in 2026-03-10T13:46:13.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:12 vm05 ceph-mon[58955]: osdmap e407: 8 total, 8 up, 8 in 2026-03-10T13:46:13.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:12 vm05 ceph-mon[51512]: osdmap e407: 8 total, 8 up, 8 in 2026-03-10T13:46:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:13 vm09 ceph-mon[53367]: pgmap v594: 292 pgs: 15 creating+peering, 17 unknown, 260 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:46:14.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:13 vm05 ceph-mon[58955]: pgmap v594: 292 pgs: 15 creating+peering, 17 unknown, 260 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:46:14.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:13 vm05 ceph-mon[51512]: pgmap v594: 292 pgs: 15 creating+peering, 17 unknown, 260 active+clean; 8.3 MiB data, 900 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:46:15.884 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:15 vm05 ceph-mon[58955]: pgmap v595: 292 pgs: 292 active+clean; 8.3 MiB data, 919 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-10T13:46:15.884 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:15 vm05 ceph-mon[51512]: pgmap v595: 292 pgs: 292 active+clean; 8.3 MiB data, 919 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-10T13:46:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 
13:46:15 vm09 ceph-mon[53367]: pgmap v595: 292 pgs: 292 active+clean; 8.3 MiB data, 919 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 4 op/s 2026-03-10T13:46:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:17 vm09 ceph-mon[53367]: pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 919 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T13:46:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:17 vm05 ceph-mon[58955]: pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 919 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T13:46:18.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:17 vm05 ceph-mon[51512]: pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 919 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T13:46:18.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:46:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:46:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:19 vm09 ceph-mon[53367]: pgmap v597: 292 pgs: 292 active+clean; 8.3 MiB data, 919 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 895 B/s wr, 3 op/s 2026-03-10T13:46:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:46:19.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:19 vm05 ceph-mon[58955]: pgmap v597: 292 pgs: 292 active+clean; 8.3 MiB data, 919 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 895 B/s wr, 3 op/s 2026-03-10T13:46:19.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:46:19.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:19 vm05 ceph-mon[51512]: pgmap v597: 292 pgs: 292 active+clean; 8.3 MiB data, 919 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 895 B/s wr, 3 op/s 2026-03-10T13:46:19.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:46:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:46:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:46:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:46:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:21 vm05 ceph-mon[58955]: pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 919 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 746 B/s wr, 3 op/s 2026-03-10T13:46:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:46:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:21 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-73"}]: dispatch 2026-03-10T13:46:22.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:21 vm05 ceph-mon[51512]: pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 919 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 746 B/s wr, 3 op/s 2026-03-10T13:46:22.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:46:22.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-73"}]: dispatch 2026-03-10T13:46:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:21 vm09 ceph-mon[53367]: pgmap v598: 292 pgs: 292 active+clean; 8.3 MiB data, 919 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 746 B/s wr, 3 op/s 2026-03-10T13:46:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:46:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-73"}]: dispatch 2026-03-10T13:46:23.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:22 vm05 ceph-mon[58955]: osdmap e408: 8 total, 8 up, 8 in 2026-03-10T13:46:23.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:22 vm05 ceph-mon[58955]: osdmap e409: 8 total, 8 up, 8 in 2026-03-10T13:46:23.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:46:23.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:22 vm05 ceph-mon[51512]: osdmap e408: 8 total, 8 up, 8 in 2026-03-10T13:46:23.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:22 vm05 ceph-mon[51512]: osdmap e409: 8 total, 8 up, 8 in 2026-03-10T13:46:23.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:46:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:22 vm09 ceph-mon[53367]: osdmap e408: 8 total, 8 up, 8 in 2026-03-10T13:46:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:22 vm09 ceph-mon[53367]: osdmap e409: 8 total, 8 up, 8 in 2026-03-10T13:46:23.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:22 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:46:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:23 vm05 ceph-mon[58955]: pgmap v600: 260 pgs: 260 active+clean; 8.3 MiB data, 919 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-10T13:46:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:46:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:23 vm05 ceph-mon[58955]: osdmap e410: 8 total, 8 up, 8 in 2026-03-10T13:46:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:46:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:23 vm05 ceph-mon[51512]: pgmap v600: 260 pgs: 260 active+clean; 8.3 MiB data, 919 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-10T13:46:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:46:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:23 vm05 ceph-mon[51512]: osdmap e410: 8 total, 8 up, 8 in 2026-03-10T13:46:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:46:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:23 vm09 ceph-mon[53367]: pgmap v600: 260 pgs: 260 active+clean; 8.3 MiB data, 919 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-10T13:46:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:46:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:23 vm09 ceph-mon[53367]: osdmap e410: 8 total, 8 up, 8 in 2026-03-10T13:46:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:23 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:46:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:25 vm05 ceph-mon[58955]: pgmap v603: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:46:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:25 vm05 ceph-mon[58955]: osdmap e411: 8 total, 8 up, 8 in 2026-03-10T13:46:26.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:25 vm05 ceph-mon[51512]: pgmap v603: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:46:26.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:25 vm05 ceph-mon[51512]: osdmap e411: 8 total, 8 up, 8 in 2026-03-10T13:46:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:25 vm09 ceph-mon[53367]: pgmap v603: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:46:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:25 vm09 ceph-mon[53367]: osdmap e411: 8 total, 8 up, 8 in 2026-03-10T13:46:27.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:26 vm05 ceph-mon[51512]: osdmap e412: 8 total, 8 up, 8 in 2026-03-10T13:46:27.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:46:27.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-75"}]: dispatch 2026-03-10T13:46:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:26 vm05 ceph-mon[58955]: osdmap e412: 8 total, 8 up, 8 in 2026-03-10T13:46:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:46:27.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-75"}]: dispatch 2026-03-10T13:46:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:26 vm09 ceph-mon[53367]: osdmap e412: 8 total, 8 up, 8 in 2026-03-10T13:46:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:46:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:26 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-75"}]: dispatch 2026-03-10T13:46:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:27 vm05 ceph-mon[51512]: pgmap v606: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:46:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:27 vm05 ceph-mon[51512]: osdmap e413: 8 total, 8 up, 8 in 2026-03-10T13:46:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:27 vm05 ceph-mon[58955]: pgmap v606: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:46:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:27 vm05 ceph-mon[58955]: osdmap e413: 8 total, 8 up, 8 in 2026-03-10T13:46:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:27 vm09 ceph-mon[53367]: pgmap v606: 292 pgs: 9 creating+activating, 18 creating+peering, 265 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:46:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:27 vm09 ceph-mon[53367]: osdmap e413: 8 total, 8 up, 8 in 2026-03-10T13:46:28.843 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:46:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:46:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:28 vm09 ceph-mon[53367]: osdmap e414: 8 total, 8 up, 8 in 2026-03-10T13:46:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:46:29.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:28 vm05 ceph-mon[51512]: osdmap e414: 8 total, 8 up, 8 in 2026-03-10T13:46:29.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:46:29.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:28 vm05 ceph-mon[58955]: osdmap e414: 8 total, 8 up, 8 in 2026-03-10T13:46:29.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:28 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-77","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:46:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:29 vm09 ceph-mon[53367]: pgmap v609: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:46:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:46:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:29 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:46:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:46:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:29 vm09 ceph-mon[53367]: osdmap e415: 8 total, 8 up, 8 in 2026-03-10T13:46:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:46:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:29 vm05 ceph-mon[51512]: pgmap v609: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:46:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:46:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:29 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:46:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:46:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:29 vm05 ceph-mon[51512]: osdmap e415: 8 total, 8 up, 8 in 2026-03-10T13:46:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:29 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:46:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:29 vm05 ceph-mon[58955]: pgmap v609: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 923 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:46:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:46:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:29 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:46:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-77","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:46:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:29 vm05 ceph-mon[58955]: osdmap e415: 8 total, 8 up, 8 in 2026-03-10T13:46:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:46:30.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:46:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:46:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:46:31.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:30 vm09 ceph-mon[53367]: osdmap e416: 8 total, 8 up, 8 in 2026-03-10T13:46:31.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:46:31.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-77"}]: dispatch 2026-03-10T13:46:31.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:30 vm05 ceph-mon[51512]: osdmap e416: 8 total, 8 up, 8 in 2026-03-10T13:46:31.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:46:31.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-77"}]: dispatch 2026-03-10T13:46:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:30 vm05 ceph-mon[58955]: osdmap e416: 8 total, 8 up, 8 in 2026-03-10T13:46:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:30 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:46:31.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-77"}]: dispatch 2026-03-10T13:46:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:31 vm09 ceph-mon[53367]: pgmap v612: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T13:46:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:31 vm09 ceph-mon[53367]: osdmap e417: 8 total, 8 up, 8 in 2026-03-10T13:46:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:31 vm05 ceph-mon[51512]: pgmap v612: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T13:46:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:31 vm05 ceph-mon[51512]: osdmap e417: 8 total, 8 up, 8 in 2026-03-10T13:46:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:31 vm05 ceph-mon[58955]: pgmap v612: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T13:46:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:31 vm05 ceph-mon[58955]: osdmap e417: 8 total, 8 up, 8 in 2026-03-10T13:46:33.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:32 vm09 ceph-mon[53367]: osdmap e418: 8 total, 8 up, 8 in 2026-03-10T13:46:33.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:46:33.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:32 vm05 ceph-mon[51512]: osdmap e418: 8 total, 8 up, 8 in 2026-03-10T13:46:33.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:46:33.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:32 vm05 ceph-mon[58955]: osdmap e418: 8 total, 8 up, 8 in 2026-03-10T13:46:33.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-79","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:46:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:33 vm09 ceph-mon[53367]: pgmap v615: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:46:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:33 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:46:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:33 vm09 ceph-mon[53367]: osdmap e419: 8 total, 8 up, 8 in 2026-03-10T13:46:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:46:34.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:33 vm05 ceph-mon[51512]: pgmap v615: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:46:34.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:46:34.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:33 vm05 ceph-mon[51512]: osdmap e419: 8 total, 8 up, 8 in 2026-03-10T13:46:34.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:46:34.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:33 vm05 ceph-mon[58955]: pgmap v615: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:46:34.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-79","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:46:34.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:33 vm05 ceph-mon[58955]: osdmap e419: 8 total, 8 up, 8 in 2026-03-10T13:46:34.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:46:35.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:34 vm05 ceph-mon[51512]: osdmap e420: 8 total, 8 up, 8 in 2026-03-10T13:46:35.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:34 vm05 ceph-mon[58955]: osdmap e420: 8 total, 8 up, 8 in 2026-03-10T13:46:35.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:34 vm09 ceph-mon[53367]: osdmap e420: 8 total, 8 up, 8 in 2026-03-10T13:46:36.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:35 vm05 ceph-mon[51512]: pgmap v618: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T13:46:36.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:35 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:46:36.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:35 vm05 ceph-mon[51512]: osdmap e421: 8 total, 8 up, 8 in 2026-03-10T13:46:36.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:35 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.6"}]: dispatch 2026-03-10T13:46:36.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:35 vm05 ceph-mon[51512]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.6"}]: dispatch 2026-03-10T13:46:36.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:35 vm05 ceph-mon[58955]: pgmap v618: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T13:46:36.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:35 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:46:36.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:35 vm05 ceph-mon[58955]: osdmap e421: 8 total, 8 up, 8 in 2026-03-10T13:46:36.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.6"}]: dispatch 2026-03-10T13:46:36.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:35 vm05 ceph-mon[58955]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.6"}]: dispatch 2026-03-10T13:46:36.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:35 vm09 ceph-mon[53367]: pgmap v618: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T13:46:36.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:35 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:46:36.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:35 vm09 ceph-mon[53367]: osdmap e421: 8 total, 8 up, 8 in 2026-03-10T13:46:36.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.6"}]: dispatch 2026-03-10T13:46:36.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:35 vm09 ceph-mon[53367]: from='mon.0 v1:192.168.123.105:0/2692263334' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "297.6"}]: dispatch 2026-03-10T13:46:37.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:36 vm05 ceph-mon[51512]: 297.6 deep-scrub starts 2026-03-10T13:46:37.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:36 vm05 ceph-mon[51512]: 297.6 deep-scrub ok 2026-03-10T13:46:37.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:36 vm05 ceph-mon[58955]: 297.6 deep-scrub starts 2026-03-10T13:46:37.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:36 vm05 ceph-mon[58955]: 297.6 deep-scrub ok 2026-03-10T13:46:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:36 vm09 ceph-mon[53367]: 297.6 deep-scrub starts 2026-03-10T13:46:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:36 vm09 ceph-mon[53367]: 297.6 deep-scrub ok 2026-03-10T13:46:38.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:37 vm05 ceph-mon[51512]: pgmap v620: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T13:46:38.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:37 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:37 vm05 ceph-mon[58955]: pgmap v620: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T13:46:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:37 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:37 vm09 ceph-mon[53367]: pgmap v620: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T13:46:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:37 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:38.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:46:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:46:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:39 vm05 ceph-mon[58955]: pgmap v621: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 853 B/s wr, 2 op/s 2026-03-10T13:46:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:46:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:39 vm05 ceph-mon[51512]: pgmap v621: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 853 B/s wr, 2 op/s 2026-03-10T13:46:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:46:39.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:39 vm09 ceph-mon[53367]: pgmap v621: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 853 B/s wr, 2 op/s 
2026-03-10T13:46:39.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:46:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:40 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:40.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:46:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:46:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:46:40.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:40 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:40 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:41.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:41 vm09 ceph-mon[53367]: pgmap v622: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 990 B/s wr, 3 op/s 2026-03-10T13:46:41.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:41 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:46:41.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:41 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:46:41.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:41 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:46:41.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:41 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:46:41.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:41 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:46:41.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:41 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:46:41.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:41 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:46:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[58955]: pgmap v622: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 990 B/s wr, 3 op/s 2026-03-10T13:46:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:46:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:46:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:46:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:46:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:46:41.581 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:46:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:46:41.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[51512]: pgmap v622: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 990 B/s wr, 3 op/s 2026-03-10T13:46:41.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:46:41.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:46:41.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:46:41.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:46:41.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:46:41.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:46:41.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:41 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:46:43.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:43 vm09 ceph-mon[53367]: pgmap v623: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 624 B/s rd, 249 B/s wr, 1 op/s 2026-03-10T13:46:43.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:43 vm05 ceph-mon[58955]: pgmap v623: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 624 B/s rd, 249 B/s wr, 1 op/s 2026-03-10T13:46:43.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:43 vm05 ceph-mon[51512]: pgmap v623: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 624 B/s rd, 249 B/s wr, 1 op/s 2026-03-10T13:46:45.543 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:45 vm09 ceph-mon[53367]: pgmap v624: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T13:46:45.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:45 vm05 ceph-mon[58955]: pgmap v624: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T13:46:45.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:45 vm05 ceph-mon[51512]: pgmap v624: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T13:46:47.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:47 vm05 ceph-mon[58955]: pgmap v625: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 915 B/s rd, 183 B/s wr, 1 op/s 2026-03-10T13:46:47.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:47 vm05 ceph-mon[51512]: pgmap v625: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 915 B/s rd, 183 B/s wr, 1 op/s 2026-03-10T13:46:47.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 
13:46:47 vm09 ceph-mon[53367]: pgmap v625: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 915 B/s rd, 183 B/s wr, 1 op/s
2026-03-10T13:46:48.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:46:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T13:46:49.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:49 vm05 ceph-mon[58955]: pgmap v626: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
2026-03-10T13:46:49.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:46:49.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:49 vm05 ceph-mon[51512]: pgmap v626: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
2026-03-10T13:46:49.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:46:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:49 vm09 ceph-mon[53367]: pgmap v626: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
2026-03-10T13:46:49.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:46:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:46:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:46:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T13:46:51.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:51 vm05 ceph-mon[58955]: pgmap v627: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
2026-03-10T13:46:51.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:51 vm05 ceph-mon[51512]: pgmap v627: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
2026-03-10T13:46:51.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:51 vm09 ceph-mon[53367]: pgmap v627: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
2026-03-10T13:46:53.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:53 vm05 ceph-mon[58955]: pgmap v628: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:46:53.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:46:53.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:53 vm05 ceph-mon[51512]: pgmap v628: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:46:53.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:46:53.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:53 vm09 ceph-mon[53367]: pgmap v628: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:46:53.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:46:55.250 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 13:46:54 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1[68059]: 2026-03-10T13:46:54.960+0000 7f83ce139640 -1 snap_mapper.add_oid found existing snaps mapped on 297:61e29dab:test-rados-api-vm05-91276-80::foo:2, removing
2026-03-10T13:46:55.250 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 13:46:54 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-2[73563]: 2026-03-10T13:46:54.961+0000 7f67205f7640 -1 snap_mapper.add_oid found existing snaps mapped on 297:61e29dab:test-rados-api-vm05-91276-80::foo:2, removing
2026-03-10T13:46:55.252 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 13:46:54 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5[62939]: 2026-03-10T13:46:54.960+0000 7f5b1e5ea640 -1 snap_mapper.add_oid found existing snaps mapped on 297:61e29dab:test-rados-api-vm05-91276-80::foo:2, removing
2026-03-10T13:46:55.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:55 vm05 ceph-mon[58955]: pgmap v629: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:46:55.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:46:55.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-79"}]: dispatch
2026-03-10T13:46:55.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:55 vm05 ceph-mon[51512]: pgmap v629: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:46:55.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:46:55.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-79"}]: dispatch
2026-03-10T13:46:55.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:55 vm09 ceph-mon[53367]: pgmap v629: 292 pgs: 292 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:46:55.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:46:55.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-79"}]: dispatch
2026-03-10T13:46:56.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:56 vm05 ceph-mon[58955]: osdmap e422: 8 total, 8 up, 8 in
2026-03-10T13:46:56.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:56 vm05 ceph-mon[58955]: osdmap e423: 8 total, 8 up, 8 in
2026-03-10T13:46:56.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:56 vm05 ceph-mon[51512]: osdmap e422: 8 total, 8 up, 8 in
2026-03-10T13:46:56.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:56 vm05 ceph-mon[51512]: osdmap e423: 8 total, 8 up, 8 in
2026-03-10T13:46:56.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:56 vm09 ceph-mon[53367]: osdmap e422: 8 total, 8 up, 8 in
2026-03-10T13:46:56.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:56 vm09 ceph-mon[53367]: osdmap e423: 8 total, 8 up, 8 in
2026-03-10T13:46:57.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:57 vm05 ceph-mon[58955]: pgmap v631: 260 pgs: 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T13:46:57.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-81","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:46:57.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-81","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:46:57.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:57 vm05 ceph-mon[58955]: osdmap e424: 8 total, 8 up, 8 in
2026-03-10T13:46:57.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T13:46:57.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:57 vm05 ceph-mon[51512]: pgmap v631: 260 pgs: 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T13:46:57.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-81","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:46:57.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-81","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:46:57.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:57 vm05 ceph-mon[51512]: osdmap e424: 8 total, 8 up, 8 in
2026-03-10T13:46:57.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:57 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T13:46:57.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:57 vm09 ceph-mon[53367]: pgmap v631: 260 pgs: 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T13:46:57.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:57 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-81","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:46:57.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:57 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-81","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:46:57.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:57 vm09 ceph-mon[53367]: osdmap e424: 8 total, 8 up, 8 in
2026-03-10T13:46:57.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:57 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T13:46:58.559 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:58 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T13:46:58.559 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:58 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T13:46:58.559 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:58 vm09 ceph-mon[53367]: osdmap e425: 8 total, 8 up, 8 in
2026-03-10T13:46:58.559 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:58 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:46:58.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:58 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T13:46:58.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:58 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T13:46:58.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:58 vm05 ceph-mon[58955]: osdmap e425: 8 total, 8 up, 8 in
2026-03-10T13:46:58.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:58 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:46:58.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:58 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T13:46:58.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:58 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T13:46:58.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:58 vm05 ceph-mon[51512]: osdmap e425: 8 total, 8 up, 8 in
2026-03-10T13:46:58.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:58 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:46:58.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:46:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T13:46:59.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:59 vm05 ceph-mon[58955]: pgmap v634: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:46:59.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:46:59.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]': finished
2026-03-10T13:46:59.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:59 vm05 ceph-mon[58955]: osdmap e426: 8 total, 8 up, 8 in
2026-03-10T13:46:59.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:46:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T13:46:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:59 vm05 ceph-mon[51512]: pgmap v634: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:46:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:46:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]': finished
2026-03-10T13:46:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:59 vm05 ceph-mon[51512]: osdmap e426: 8 total, 8 up, 8 in
2026-03-10T13:46:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:46:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T13:46:59.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:59 vm09 ceph-mon[53367]: pgmap v634: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 946 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:46:59.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:46:59.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]': finished
2026-03-10T13:46:59.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:59 vm09 ceph-mon[53367]: osdmap e426: 8 total, 8 up, 8 in
2026-03-10T13:46:59.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:46:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T13:47:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:46:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:46:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T13:47:01.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:01 vm05 ceph-mon[58955]: pgmap v637: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T13:47:01.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T13:47:01.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:01 vm05 ceph-mon[58955]: osdmap e427: 8 total, 8 up, 8 in
2026-03-10T13:47:01.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T13:47:01.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:01 vm05 ceph-mon[51512]: pgmap v637: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T13:47:01.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T13:47:01.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:01 vm05 ceph-mon[51512]: osdmap e427: 8 total, 8 up, 8 in
2026-03-10T13:47:01.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T13:47:01.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:01 vm09 ceph-mon[53367]: pgmap v637: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T13:47:01.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T13:47:01.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:01 vm09 ceph-mon[53367]: osdmap e427: 8 total, 8 up, 8 in
2026-03-10T13:47:01.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T13:47:02.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:02 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T13:47:02.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:02 vm05 ceph-mon[58955]: osdmap e428: 8 total, 8 up, 8 in
2026-03-10T13:47:02.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:02 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch
2026-03-10T13:47:02.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:02 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T13:47:02.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:02 vm05 ceph-mon[51512]: osdmap e428: 8 total, 8 up, 8 in
2026-03-10T13:47:02.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:02 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch
2026-03-10T13:47:02.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:02 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T13:47:02.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:02 vm09 ceph-mon[53367]: osdmap e428: 8 total, 8 up, 8 in
2026-03-10T13:47:02.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:02 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch
2026-03-10T13:47:03.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:03 vm05 ceph-mon[58955]: pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T13:47:03.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:03 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished
2026-03-10T13:47:03.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:03 vm05 ceph-mon[58955]: osdmap e429: 8 total, 8 up, 8 in
2026-03-10T13:47:03.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:03 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch
2026-03-10T13:47:03.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:03 vm05 ceph-mon[51512]: pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T13:47:03.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished
2026-03-10T13:47:03.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:03 vm05 ceph-mon[51512]: osdmap e429: 8 total, 8 up, 8 in
2026-03-10T13:47:03.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch
2026-03-10T13:47:03.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:03 vm09 ceph-mon[53367]: pgmap v640: 292 pgs: 292 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T13:47:03.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:03 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished
2026-03-10T13:47:03.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:03 vm09 ceph-mon[53367]: osdmap e429: 8 total, 8 up, 8 in
2026-03-10T13:47:03.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:03 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch
2026-03-10T13:47:04.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished
2026-03-10T13:47:04.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:04 vm05 ceph-mon[58955]: osdmap e430: 8 total, 8 up, 8 in
2026-03-10T13:47:04.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T13:47:04.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished
2026-03-10T13:47:04.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:04 vm05 ceph-mon[51512]: osdmap e430: 8 total, 8 up, 8 in
2026-03-10T13:47:04.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T13:47:04.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished
2026-03-10T13:47:04.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:04 vm09 ceph-mon[53367]: osdmap e430: 8 total, 8 up, 8 in
2026-03-10T13:47:04.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T13:47:05.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:05 vm05 ceph-mon[58955]: pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 11 KiB/s wr, 28 op/s
2026-03-10T13:47:05.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T13:47:05.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:05 vm05 ceph-mon[58955]: osdmap e431: 8 total, 8 up, 8 in
2026-03-10T13:47:05.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:47:05.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:05 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-81"}]: dispatch
2026-03-10T13:47:05.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:05 vm05 ceph-mon[51512]: pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 11 KiB/s wr, 28 op/s
2026-03-10T13:47:05.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T13:47:05.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:05 vm05 ceph-mon[51512]: osdmap e431: 8 total, 8 up, 8 in
2026-03-10T13:47:05.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:47:05.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-81"}]: dispatch
2026-03-10T13:47:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:05 vm09 ceph-mon[53367]: pgmap v643: 292 pgs: 292 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 11 KiB/s wr, 28 op/s
2026-03-10T13:47:05.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:05 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T13:47:05.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:05 vm09 ceph-mon[53367]: osdmap e431: 8 total, 8 up, 8 in
2026-03-10T13:47:05.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:05 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:47:05.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:05 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-81"}]: dispatch
2026-03-10T13:47:06.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:06 vm05 ceph-mon[58955]: osdmap e432: 8 total, 8 up, 8 in
2026-03-10T13:47:06.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:06 vm05 ceph-mon[51512]: osdmap e432: 8 total, 8 up, 8 in
2026-03-10T13:47:06.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:06 vm09 ceph-mon[53367]: osdmap e432: 8 total, 8 up, 8 in
2026-03-10T13:47:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:07 vm09 ceph-mon[53367]: pgmap v646: 260 pgs: 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 9.0 KiB/s wr, 28 op/s
2026-03-10T13:47:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:07 vm09 ceph-mon[53367]: osdmap e433: 8 total, 8 up, 8 in
2026-03-10T13:47:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-83","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:47:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:07 vm05 ceph-mon[58955]: pgmap v646: 260 pgs: 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 9.0 KiB/s wr, 28 op/s
2026-03-10T13:47:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:07 vm05 ceph-mon[58955]: osdmap e433: 8 total, 8 up, 8 in
2026-03-10T13:47:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-83","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:47:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:07 vm05 ceph-mon[51512]: pgmap v646: 260 pgs: 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 7.2 KiB/s rd, 9.0 KiB/s wr, 28 op/s
2026-03-10T13:47:07.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:07 vm05 ceph-mon[51512]: osdmap e433: 8 total, 8 up, 8 in
2026-03-10T13:47:07.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-83","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:47:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-83","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:47:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:08 vm09 ceph-mon[53367]: osdmap e434: 8 total, 8 up, 8 in
2026-03-10T13:47:08.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T13:47:08.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T13:47:08.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:47:08.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T13:47:08.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:08 vm09 ceph-mon[53367]: osdmap e435: 8 total, 8 up, 8 in
2026-03-10T13:47:08.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:47:08.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:47:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T13:47:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-83","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:47:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[58955]: osdmap e434: 8 total, 8 up, 8 in
2026-03-10T13:47:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T13:47:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T13:47:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:47:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T13:47:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[58955]: osdmap e435: 8 total, 8 up, 8 in
2026-03-10T13:47:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:47:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-83","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:47:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[51512]: osdmap e434: 8 total, 8 up, 8 in
2026-03-10T13:47:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T13:47:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T13:47:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:47:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T13:47:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[51512]: osdmap e435: 8 total, 8 up, 8 in
2026-03-10T13:47:08.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:47:09.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:09 vm09 ceph-mon[53367]: pgmap v649: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail
2026-03-10T13:47:09.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:47:09.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]': finished
2026-03-10T13:47:09.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:09 vm09 ceph-mon[53367]: osdmap e436: 8 total, 8 up, 8 in
2026-03-10T13:47:09.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T13:47:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:09 vm05 ceph-mon[58955]: pgmap v649: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail
2026-03-10T13:47:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:47:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]': finished
2026-03-10T13:47:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:09 vm05 ceph-mon[58955]: osdmap e436: 8 total, 8 up, 8 in
2026-03-10T13:47:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T13:47:09.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:09 vm05 ceph-mon[51512]: pgmap v649: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail
2026-03-10T13:47:09.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:47:09.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]': finished
2026-03-10T13:47:09.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:09 vm05 ceph-mon[51512]: osdmap e436: 8 total, 8 up, 8 in
2026-03-10T13:47:09.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T13:47:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:47:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:47:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T13:47:11.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:11 vm09 ceph-mon[53367]: pgmap v652: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 4.0 KiB/s rd, 3.2 KiB/s wr, 12 op/s
2026-03-10T13:47:11.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T13:47:11.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:11 vm09 ceph-mon[53367]: osdmap e437: 8 total, 8 up, 8 in
2026-03-10T13:47:11.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T13:47:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:11 vm05 ceph-mon[58955]: pgmap v652: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 4.0 KiB/s rd, 3.2 KiB/s wr, 12 op/s
2026-03-10T13:47:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T13:47:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:11 vm05 ceph-mon[58955]: osdmap e437: 8 total, 8 up, 8 in
2026-03-10T13:47:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T13:47:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:11 vm05 ceph-mon[51512]: pgmap v652: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 4.0 KiB/s rd, 3.2 KiB/s wr, 12 op/s
2026-03-10T13:47:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T13:47:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:11 vm05 ceph-mon[51512]: osdmap e437: 8 total, 8 up, 8 in
2026-03-10T13:47:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T13:47:12.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:12 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T13:47:12.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:12 vm09 ceph-mon[53367]: osdmap e438: 8 total, 8 up, 8 in
2026-03-10T13:47:12.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:12 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T13:47:12.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:12 vm05 ceph-mon[58955]: osdmap e438: 8 total, 8 up, 8 in
2026-03-10T13:47:12.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:12 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T13:47:12.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:12 vm05 ceph-mon[51512]: osdmap e438: 8 total, 8 up, 8 in
2026-03-10T13:47:13.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:13 vm09 ceph-mon[53367]: pgmap v655: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 4.0 KiB/s rd, 3.2 KiB/s wr, 12 op/s
2026-03-10T13:47:13.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:13 vm09 ceph-mon[53367]: osdmap e439: 8 total, 8 up, 8 in
2026-03-10T13:47:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:13 vm05 ceph-mon[58955]: pgmap v655: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 4.0 KiB/s rd, 3.2 KiB/s wr, 12 op/s
2026-03-10T13:47:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:13 vm05 ceph-mon[58955]: osdmap e439: 8 total, 8 up, 8 in
2026-03-10T13:47:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:13 vm05 ceph-mon[51512]: pgmap v655: 292 pgs: 292 active+clean; 8.3 MiB data, 956 MiB used, 159 GiB / 160 GiB avail; 4.0 KiB/s rd, 3.2 KiB/s wr, 12 op/s
2026-03-10T13:47:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:13 vm05 ceph-mon[51512]: osdmap e439: 8 total, 8 up, 8 in
2026-03-10T13:47:14.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:14 vm05 ceph-mon[58955]: osdmap e440: 8 total, 8 up, 8 in
2026-03-10T13:47:14.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:47:14.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-83"}]: dispatch
2026-03-10T13:47:14.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:14 vm05 ceph-mon[51512]: osdmap e440: 8 total, 8 up, 8 in
2026-03-10T13:47:14.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:47:14.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-83"}]: dispatch
2026-03-10T13:47:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:14 vm09 ceph-mon[53367]: osdmap e440: 8 total, 8 up, 8 in
2026-03-10T13:47:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:47:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-83"}]: dispatch
2026-03-10T13:47:15.733 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:15 vm05 ceph-mon[58955]: pgmap v658: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 4.0 KiB/s wr, 5 op/s
2026-03-10T13:47:15.733 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:15 vm05 ceph-mon[58955]: osdmap e441: 8 total, 8 up, 8 in
2026-03-10T13:47:15.733 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:15 vm05 ceph-mon[51512]: pgmap v658: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 4.0 KiB/s wr, 5 op/s
2026-03-10T13:47:15.733 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:15 vm05 ceph-mon[51512]: osdmap e441: 8 total, 8 up, 8 in
2026-03-10T13:47:15.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:15 vm09 ceph-mon[53367]: pgmap v658: 292 pgs: 292 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 4.0 KiB/s wr, 5 op/s
2026-03-10T13:47:15.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:15 vm09 ceph-mon[53367]: osdmap e441: 8 total, 8 up, 8 in
2026-03-10T13:47:16.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:16 vm05 ceph-mon[51512]: osdmap e442: 8 total, 8 up, 8 in
2026-03-10T13:47:16.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-85","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:47:16.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:16 vm05 ceph-mon[58955]: osdmap e442: 8 total, 8 up, 8 in
2026-03-10T13:47:16.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-85","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:47:16.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:16 vm09 ceph-mon[53367]: osdmap e442: 8 total, 8 up, 8 in
2026-03-10T13:47:16.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-85","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:47:17.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:17 vm05 ceph-mon[58955]: pgmap v661: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 5 op/s
2026-03-10T13:47:17.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:17 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:47:17.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-85","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:47:17.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:17 vm05 ceph-mon[58955]: osdmap e443: 8 total, 8 up, 8 in
2026-03-10T13:47:17.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T13:47:17.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T13:47:17.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:17 vm05 ceph-mon[51512]: pgmap v661: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 5 op/s
2026-03-10T13:47:17.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:17 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:47:17.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-85","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:47:17.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:17 vm05 ceph-mon[51512]: osdmap e443: 8 total, 8 up, 8 in
2026-03-10T13:47:17.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T13:47:17.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T13:47:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:17 vm09 ceph-mon[53367]: pgmap v661: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 5 op/s
2026-03-10T13:47:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:17 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:47:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-85","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:47:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:17 vm09 ceph-mon[53367]: osdmap e443: 8 total, 8 up, 8 in
2026-03-10T13:47:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T13:47:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T13:47:18.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:18 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T13:47:18.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:18 vm05 ceph-mon[58955]: osdmap e444: 8 total, 8 up, 8 in
2026-03-10T13:47:18.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:18 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:47:18.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:18 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T13:47:18.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:18 vm05 ceph-mon[51512]: osdmap e444: 8 total, 8 up, 8 in
2026-03-10T13:47:18.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:18 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:47:18.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:47:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T13:47:18.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:18 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T13:47:18.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:18 vm09 ceph-mon[53367]: osdmap e444: 8 total, 8 up, 8 in
2026-03-10T13:47:18.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:18 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]: dispatch
2026-03-10T13:47:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[58955]: pgmap v664: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail
2026-03-10T13:47:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]': finished
2026-03-10T13:47:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[58955]: osdmap e445: 8 total, 8 up, 8 in
2026-03-10T13:47:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T13:47:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:47:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T13:47:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[58955]: osdmap e446: 8 total, 8 up, 8 in
2026-03-10T13:47:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T13:47:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[51512]: pgmap v664: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail
2026-03-10T13:47:19.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]': finished
2026-03-10T13:47:19.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[51512]: osdmap e445: 8 total, 8 up, 8 in
2026-03-10T13:47:19.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T13:47:19.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:47:19.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T13:47:19.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[51512]: osdmap e446: 8 total, 8 up, 8 in
2026-03-10T13:47:19.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T13:47:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:19 vm09 ceph-mon[53367]: pgmap v664: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 957 MiB used, 159 GiB / 160 GiB avail
2026-03-10T13:47:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_tier","val": "test-rados-api-vm05-91276-6"}]': finished
2026-03-10T13:47:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:19 vm09 ceph-mon[53367]: osdmap e445: 8 total, 8 up, 8 in
2026-03-10T13:47:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T13:47:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:47:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T13:47:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:19 vm09 ceph-mon[53367]: osdmap e446: 8 total, 8 up, 8 in
2026-03-10T13:47:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T13:47:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:47:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:47:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T13:47:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:21 vm05 ceph-mon[58955]: pgmap v667: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.5 KiB/s wr, 6 op/s
2026-03-10T13:47:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T13:47:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:21 vm05 ceph-mon[58955]: osdmap e447: 8 total, 8 up, 8 in
2026-03-10T13:47:21.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:21 vm05 ceph-mon[51512]: pgmap v667: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.5 KiB/s wr, 6 op/s
2026-03-10T13:47:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T13:47:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:21 vm05 ceph-mon[51512]: osdmap e447: 8 total, 8 up, 8 in
2026-03-10T13:47:21.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:21 vm09 ceph-mon[53367]: pgmap v667: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.5 KiB/s wr, 6 op/s
2026-03-10T13:47:21.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:21 vm09 ceph-mon[53367]: from='client.?
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T13:47:21.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:21 vm09 ceph-mon[53367]: osdmap e447: 8 total, 8 up, 8 in 2026-03-10T13:47:22.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:22 vm05 ceph-mon[58955]: osdmap e448: 8 total, 8 up, 8 in 2026-03-10T13:47:22.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:22 vm05 ceph-mon[51512]: osdmap e448: 8 total, 8 up, 8 in 2026-03-10T13:47:22.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:22 vm09 ceph-mon[53367]: osdmap e448: 8 total, 8 up, 8 in 2026-03-10T13:47:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:23 vm05 ceph-mon[58955]: pgmap v670: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.5 KiB/s wr, 6 op/s 2026-03-10T13:47:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:23 vm05 ceph-mon[58955]: osdmap e449: 8 total, 8 up, 8 in 2026-03-10T13:47:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:47:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-85"}]: dispatch 2026-03-10T13:47:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:23 vm05 ceph-mon[51512]: pgmap v670: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.5 KiB/s wr, 6 op/s 2026-03-10T13:47:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:23 vm05 ceph-mon[51512]: osdmap e449: 8 total, 8 up, 8 in 2026-03-10T13:47:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:47:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-85"}]: dispatch 2026-03-10T13:47:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:23 vm09 ceph-mon[53367]: pgmap v670: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.5 KiB/s wr, 6 op/s 2026-03-10T13:47:23.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:23 vm09 ceph-mon[53367]: osdmap e449: 8 total, 8 up, 8 in 2026-03-10T13:47:23.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:23 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:47:23.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-85"}]: dispatch 2026-03-10T13:47:23.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:24 vm05 ceph-mon[58955]: osdmap e450: 8 total, 8 up, 8 in 2026-03-10T13:47:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:24 vm05 ceph-mon[51512]: osdmap e450: 8 total, 8 up, 8 in 2026-03-10T13:47:24.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:24 vm09 ceph-mon[53367]: osdmap e450: 8 total, 8 up, 8 in 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: OK ] LibRadosTwoPoolsPP.ProxyRead (18268 ms) 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.CachePin 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.CachePin (22945 ms) 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.SetRedirectRead 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.SetRedirectRead (3041 ms) 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestPromoteRead 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestPromoteRead (3213 ms) 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRefRead 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRefRead (3014 ms) 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestUnset 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestUnset (3073 ms) 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestDedupRefRead 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestDedupRefRead (4033 ms) 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapRefcount 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapRefcount (37353 ms) 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapRefcount2 2026-03-10T13:47:25.580 
INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapRefcount2 (16787 ms) 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestTestSnapCreate 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestTestSnapCreate (4028 ms) 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRedirectAfterPromote 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRedirectAfterPromote (3012 ms) 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestCheckRefcountWhenModification 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestCheckRefcountWhenModification (24194 ms) 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapIncCount 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapIncCount (14172 ms) 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvict 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvict (5029 ms) 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvictPromote 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvictPromote (4170 ms) 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapSizeMismatch 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: waiting for scrubs... 
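(Note: the api_tier_pp workunit above emits standard gtest result lines of the form "[ OK ] LibRadosTwoPoolsPP.SomeTest (N ms)" interleaved with the journalctl streams. The following is only an illustrative helper, not part of this run: a small Python sketch that pulls those result lines out of a downloaded teuthology.log and ranks the cases by runtime. The file name "teuthology.log" is an assumption; point it at whatever log file you saved.)

#!/usr/bin/env python3
# Sketch only: summarize gtest results embedded in a teuthology workunit log.
import re
import sys

# Matches lines like "[       OK ] LibRadosTwoPoolsPP.CachePin (22945 ms)"
# or "[  FAILED  ] Suite.Case (123 ms)" as echoed by the workunit above.
RESULT_RE = re.compile(r"\[\s+(OK|FAILED)\s+\]\s+(\S+)\s+\((\d+) ms\)")

def summarize(path):
    results = []
    with open(path, errors="replace") as fh:
        for line in fh:
            m = RESULT_RE.search(line)
            if m:
                status, name, ms = m.group(1), m.group(2), int(m.group(3))
                results.append((ms, status, name))
    # Longest-running tests first.
    for ms, status, name in sorted(results, reverse=True):
        print(f"{status:6} {ms:>8} ms  {name}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "teuthology.log")

(Usage, assuming the log was saved locally: python3 gtest_times.py teuthology.log)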
2026-03-10T13:47:25.580 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: done waiting 2026-03-10T13:47:25.581 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapSizeMismatch (24376 ms) 2026-03-10T13:47:25.581 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.DedupFlushRead 2026-03-10T13:47:25.581 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:47:25.581 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.DedupFlushRead (10062 ms) 2026-03-10T13:47:25.581 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestFlushSnap 2026-03-10T13:47:25.581 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:47:25.581 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestFlushSnap (9127 ms) 2026-03-10T13:47:25.581 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestFlushDupCount 2026-03-10T13:47:25.581 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:47:25.581 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestFlushDupCount (9056 ms) 2026-03-10T13:47:25.581 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TierFlushDuringFlush 2026-03-10T13:47:25.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:25 vm09 ceph-mon[53367]: pgmap v673: 260 pgs: 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T13:47:25.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:25 vm09 ceph-mon[53367]: osdmap e451: 8 total, 8 up, 8 in 2026-03-10T13:47:25.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:47:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:25 vm05 ceph-mon[58955]: pgmap v673: 260 pgs: 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T13:47:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:25 vm05 ceph-mon[58955]: osdmap e451: 8 total, 8 up, 8 in 2026-03-10T13:47:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:47:26.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:25 vm05 ceph-mon[51512]: pgmap v673: 260 pgs: 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T13:47:26.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:25 vm05 ceph-mon[51512]: osdmap e451: 8 total, 8 up, 8 in 2026-03-10T13:47:26.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:25 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-87","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:47:26.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:47:26.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:26 vm09 ceph-mon[53367]: osdmap e452: 8 total, 8 up, 8 in 2026-03-10T13:47:26.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:47:26.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:26 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:47:26.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:26 vm09 ceph-mon[53367]: osdmap e453: 8 total, 8 up, 8 in 2026-03-10T13:47:26.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:47:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:47:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:26 vm05 ceph-mon[58955]: osdmap e452: 8 total, 8 up, 8 in 2026-03-10T13:47:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:47:27.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:26 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:47:27.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:26 vm05 ceph-mon[58955]: osdmap e453: 8 total, 8 up, 8 in 2026-03-10T13:47:27.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:47:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:47:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:26 vm05 ceph-mon[51512]: osdmap e452: 8 total, 8 up, 8 in 2026-03-10T13:47:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:26 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:47:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:26 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:47:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:26 vm05 ceph-mon[51512]: osdmap e453: 8 total, 8 up, 8 in 2026-03-10T13:47:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:47:27.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:27 vm09 ceph-mon[53367]: pgmap v676: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T13:47:27.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:47:27.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:27 vm09 ceph-mon[53367]: osdmap e454: 8 total, 8 up, 8 in 2026-03-10T13:47:27.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_tier","val": "test-rados-api-vm05-91276-89-test-flush"}]: dispatch 2026-03-10T13:47:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:27 vm05 ceph-mon[58955]: pgmap v676: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T13:47:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:47:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:27 vm05 ceph-mon[58955]: osdmap e454: 8 total, 8 up, 8 in 2026-03-10T13:47:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_tier","val": "test-rados-api-vm05-91276-89-test-flush"}]: dispatch 2026-03-10T13:47:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:27 vm05 ceph-mon[51512]: pgmap v676: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T13:47:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:47:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:27 vm05 ceph-mon[51512]: osdmap e454: 8 total, 8 up, 8 in 2026-03-10T13:47:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:27 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_tier","val": "test-rados-api-vm05-91276-89-test-flush"}]: dispatch 2026-03-10T13:47:28.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:47:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:47:29.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:29 vm09 ceph-mon[53367]: pgmap v679: 324 pgs: 64 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:47:29.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_tier","val": "test-rados-api-vm05-91276-89-test-flush"}]': finished 2026-03-10T13:47:29.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:29 vm09 ceph-mon[53367]: osdmap e455: 8 total, 8 up, 8 in 2026-03-10T13:47:29.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T13:47:29.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:47:29.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:29 vm05 ceph-mon[58955]: pgmap v679: 324 pgs: 64 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:47:29.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_tier","val": "test-rados-api-vm05-91276-89-test-flush"}]': finished 2026-03-10T13:47:29.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:29 vm05 ceph-mon[58955]: osdmap e455: 8 total, 8 up, 8 in 2026-03-10T13:47:29.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T13:47:29.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:47:29.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:29 vm05 ceph-mon[51512]: pgmap v679: 324 pgs: 64 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:47:29.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:29 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_tier","val": "test-rados-api-vm05-91276-89-test-flush"}]': finished 2026-03-10T13:47:29.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:29 vm05 ceph-mon[51512]: osdmap e455: 8 total, 8 up, 8 in 2026-03-10T13:47:29.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T13:47:29.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:47:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:47:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:47:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:47:30.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T13:47:30.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:30 vm09 ceph-mon[53367]: osdmap e456: 8 total, 8 up, 8 in 2026-03-10T13:47:30.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T13:47:30.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T13:47:30.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:30 vm09 ceph-mon[53367]: osdmap e457: 8 total, 8 up, 8 in 2026-03-10T13:47:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T13:47:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:30 vm05 ceph-mon[58955]: osdmap e456: 8 total, 8 up, 8 in 2026-03-10T13:47:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T13:47:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:30 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T13:47:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:30 vm05 ceph-mon[58955]: osdmap e457: 8 total, 8 up, 8 in 2026-03-10T13:47:31.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T13:47:31.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:30 vm05 ceph-mon[51512]: osdmap e456: 8 total, 8 up, 8 in 2026-03-10T13:47:31.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T13:47:31.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T13:47:31.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:30 vm05 ceph-mon[51512]: osdmap e457: 8 total, 8 up, 8 in 2026-03-10T13:47:31.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:31 vm09 ceph-mon[53367]: pgmap v682: 324 pgs: 324 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:47:31.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:31 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:47:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:31 vm05 ceph-mon[58955]: pgmap v682: 324 pgs: 324 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:47:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:31 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:47:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:31 vm05 ceph-mon[51512]: pgmap v682: 324 pgs: 324 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:47:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:31 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:47:32.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:32 vm09 ceph-mon[53367]: osdmap e458: 8 total, 8 up, 8 in 2026-03-10T13:47:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:47:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:32 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-87"}]: dispatch 2026-03-10T13:47:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:32 vm05 ceph-mon[58955]: osdmap e458: 8 total, 8 up, 8 in 2026-03-10T13:47:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:47:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-87"}]: dispatch 2026-03-10T13:47:33.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:32 vm05 ceph-mon[51512]: osdmap e458: 8 total, 8 up, 8 in 2026-03-10T13:47:33.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:47:33.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-87"}]: dispatch 2026-03-10T13:47:33.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:33 vm09 ceph-mon[53367]: pgmap v685: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:47:33.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:33 vm09 ceph-mon[53367]: osdmap e459: 8 total, 8 up, 8 in 2026-03-10T13:47:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:33 vm05 ceph-mon[58955]: pgmap v685: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:47:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:33 vm05 ceph-mon[58955]: osdmap e459: 8 total, 8 up, 8 in 2026-03-10T13:47:34.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:33 vm05 ceph-mon[51512]: pgmap v685: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:47:34.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:33 vm05 ceph-mon[51512]: osdmap e459: 8 total, 8 up, 8 in 2026-03-10T13:47:34.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:34 vm09 ceph-mon[53367]: osdmap e460: 8 total, 8 up, 8 in 2026-03-10T13:47:34.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:47:35.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:34 vm05 ceph-mon[58955]: osdmap e460: 8 total, 8 up, 8 in 2026-03-10T13:47:35.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:34 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:47:35.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:34 vm05 ceph-mon[51512]: osdmap e460: 8 total, 8 up, 8 in 2026-03-10T13:47:35.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-90","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:47:35.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:35 vm09 ceph-mon[53367]: pgmap v688: 292 pgs: 11 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:47:35.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:47:35.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:35 vm09 ceph-mon[53367]: osdmap e461: 8 total, 8 up, 8 in 2026-03-10T13:47:35.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:47:35.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:47:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:35 vm05 ceph-mon[58955]: pgmap v688: 292 pgs: 11 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:47:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:47:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:35 vm05 ceph-mon[58955]: osdmap e461: 8 total, 8 up, 8 in 2026-03-10T13:47:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:47:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:47:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:35 vm05 ceph-mon[51512]: pgmap v688: 292 pgs: 11 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:47:36.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:35 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-90","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:47:36.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:35 vm05 ceph-mon[51512]: osdmap e461: 8 total, 8 up, 8 in 2026-03-10T13:47:36.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:47:36.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:35 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:47:36.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:47:36.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:36 vm09 ceph-mon[53367]: osdmap e462: 8 total, 8 up, 8 in 2026-03-10T13:47:37.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:47:37.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:36 vm05 ceph-mon[58955]: osdmap e462: 8 total, 8 up, 8 in 2026-03-10T13:47:37.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:36 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:47:37.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:36 vm05 ceph-mon[51512]: osdmap e462: 8 total, 8 up, 8 in 2026-03-10T13:47:37.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:37 vm09 ceph-mon[53367]: pgmap v691: 292 pgs: 11 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:47:37.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:37 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:47:37.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:37 vm09 ceph-mon[53367]: osdmap e463: 8 total, 8 up, 8 in 2026-03-10T13:47:38.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:37 vm05 ceph-mon[58955]: pgmap v691: 292 pgs: 11 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:47:38.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:37 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:47:38.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:37 vm05 ceph-mon[58955]: osdmap e463: 8 total, 8 up, 8 in 2026-03-10T13:47:38.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:37 vm05 ceph-mon[51512]: pgmap v691: 292 pgs: 11 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:47:38.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:37 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:47:38.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:37 vm05 ceph-mon[51512]: osdmap e463: 8 total, 8 up, 8 in 2026-03-10T13:47:38.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:47:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:47:38.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:38 vm09 ceph-mon[53367]: osdmap e464: 8 total, 8 up, 8 in 2026-03-10T13:47:38.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:47:38.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-90"}]: dispatch 2026-03-10T13:47:38.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:39.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:38 vm05 ceph-mon[58955]: osdmap e464: 8 total, 8 up, 8 in 2026-03-10T13:47:39.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:38 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:47:39.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-90"}]: dispatch 2026-03-10T13:47:39.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:39.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:38 vm05 ceph-mon[51512]: osdmap e464: 8 total, 8 up, 8 in 2026-03-10T13:47:39.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:47:39.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-90"}]: dispatch 2026-03-10T13:47:39.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:39.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:39 vm05 ceph-mon[58955]: pgmap v694: 292 pgs: 11 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:47:39.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:47:39.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:39 vm05 ceph-mon[58955]: osdmap e465: 8 total, 8 up, 8 in 2026-03-10T13:47:39.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:39 vm05 ceph-mon[51512]: pgmap v694: 292 pgs: 11 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:47:39.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:47:39.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:39 vm05 ceph-mon[51512]: osdmap e465: 8 total, 8 up, 8 in 2026-03-10T13:47:40.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:39 vm09 ceph-mon[53367]: pgmap v694: 292 pgs: 11 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 947 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:47:40.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:47:40.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:39 vm09 ceph-mon[53367]: osdmap e465: 8 total, 8 up, 8 in 2026-03-10T13:47:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:47:39 vm05 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:47:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:47:41.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:40 vm09 ceph-mon[53367]: osdmap e466: 8 total, 8 up, 8 in 2026-03-10T13:47:41.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:47:41.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:40 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:47:41.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:40 vm05 ceph-mon[58955]: osdmap e466: 8 total, 8 up, 8 in 2026-03-10T13:47:41.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:47:41.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:40 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:47:41.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:40 vm05 ceph-mon[51512]: osdmap e466: 8 total, 8 up, 8 in 2026-03-10T13:47:41.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-92","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:47:41.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:40 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:47:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:41 vm09 ceph-mon[53367]: pgmap v697: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T13:47:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:41 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:47:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:47:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:41 vm09 ceph-mon[53367]: osdmap e467: 8 total, 8 up, 8 in 2026-03-10T13:47:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:41 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:47:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:41 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:47:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:41 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:47:42.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:41 vm05 ceph-mon[58955]: pgmap v697: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T13:47:42.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:41 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:47:42.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:47:42.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:41 vm05 ceph-mon[58955]: osdmap e467: 8 total, 8 up, 8 in 2026-03-10T13:47:42.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:47:42.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:41 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:47:42.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:41 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:47:42.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:41 vm05 ceph-mon[51512]: pgmap v697: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T13:47:42.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:41 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:47:42.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-92","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:47:42.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:41 vm05 ceph-mon[51512]: osdmap e467: 8 total, 8 up, 8 in 2026-03-10T13:47:42.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:41 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:47:42.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:41 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:47:42.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:41 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:47:43.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:42 vm09 ceph-mon[53367]: osdmap e468: 8 total, 8 up, 8 in 2026-03-10T13:47:43.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:42 vm05 ceph-mon[58955]: osdmap e468: 8 total, 8 up, 8 in 2026-03-10T13:47:43.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:42 vm05 ceph-mon[51512]: osdmap e468: 8 total, 8 up, 8 in 2026-03-10T13:47:44.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:43 vm09 ceph-mon[53367]: pgmap v700: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T13:47:44.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:43 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:47:44.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:43 vm09 ceph-mon[53367]: osdmap e469: 8 total, 8 up, 8 in 2026-03-10T13:47:44.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:47:44.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-92"}]: dispatch 2026-03-10T13:47:44.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:43 vm05 ceph-mon[58955]: pgmap v700: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T13:47:44.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:43 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:47:44.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:43 vm05 ceph-mon[58955]: osdmap e469: 8 total, 8 up, 8 in 2026-03-10T13:47:44.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:47:44.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:43 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-92"}]: dispatch 2026-03-10T13:47:44.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:43 vm05 ceph-mon[51512]: pgmap v700: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 948 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T13:47:44.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:43 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:47:44.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:43 vm05 ceph-mon[51512]: osdmap e469: 8 total, 8 up, 8 in 2026-03-10T13:47:44.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:47:44.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-92"}]: dispatch 2026-03-10T13:47:45.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:44 vm09 ceph-mon[53367]: osdmap e470: 8 total, 8 up, 8 in 2026-03-10T13:47:45.233 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:44 vm05 ceph-mon[58955]: osdmap e470: 8 total, 8 up, 8 in 2026-03-10T13:47:45.233 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:44 vm05 ceph-mon[51512]: osdmap e470: 8 total, 8 up, 8 in 2026-03-10T13:47:46.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:45 vm09 ceph-mon[53367]: pgmap v703: 260 pgs: 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:47:46.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:45 vm09 ceph-mon[53367]: osdmap e471: 8 total, 8 up, 8 in 2026-03-10T13:47:46.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:47:46.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:45 vm05 ceph-mon[58955]: pgmap v703: 260 pgs: 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:47:46.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:45 vm05 ceph-mon[58955]: osdmap e471: 8 total, 8 up, 8 in 2026-03-10T13:47:46.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:47:46.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:45 vm05 ceph-mon[51512]: pgmap v703: 260 pgs: 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:47:46.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:45 vm05 ceph-mon[51512]: osdmap e471: 8 total, 8 up, 8 in 2026-03-10T13:47:46.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:45 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-94","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:47:47.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:47:47.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:46 vm09 ceph-mon[53367]: osdmap e472: 8 total, 8 up, 8 in 2026-03-10T13:47:47.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:47:47.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:47:47.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:46 vm05 ceph-mon[58955]: osdmap e472: 8 total, 8 up, 8 in 2026-03-10T13:47:47.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:47:47.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-94","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:47:47.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:46 vm05 ceph-mon[51512]: osdmap e472: 8 total, 8 up, 8 in 2026-03-10T13:47:47.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:46 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:47:48.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:47 vm09 ceph-mon[53367]: pgmap v706: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:47:48.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:47 vm09 ceph-mon[53367]: osdmap e473: 8 total, 8 up, 8 in 2026-03-10T13:47:48.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:47 vm05 ceph-mon[58955]: pgmap v706: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:47:48.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:47 vm05 ceph-mon[58955]: osdmap e473: 8 total, 8 up, 8 in 2026-03-10T13:47:48.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:47 vm05 ceph-mon[51512]: pgmap v706: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:47:48.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:47 vm05 ceph-mon[51512]: osdmap e473: 8 total, 8 up, 8 in 2026-03-10T13:47:48.912 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:47:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:47:49.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:48 vm09 ceph-mon[53367]: osdmap e474: 8 total, 8 up, 8 in 2026-03-10T13:47:49.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:48 vm05 ceph-mon[58955]: osdmap e474: 8 total, 8 up, 8 in 2026-03-10T13:47:49.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:48 vm05 ceph-mon[51512]: osdmap e474: 8 total, 8 up, 8 in 2026-03-10T13:47:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:49 vm05 ceph-mon[58955]: pgmap v709: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:47:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:47:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:49 vm05 ceph-mon[51512]: pgmap v709: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:47:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:47:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:47:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:47:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:47:50.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:49 vm09 ceph-mon[53367]: pgmap v709: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:47:50.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:47:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:52 vm05 ceph-mon[58955]: pgmap v710: 292 pgs: 292 
active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s 2026-03-10T13:47:52.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:52 vm05 ceph-mon[51512]: pgmap v710: 292 pgs: 292 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s 2026-03-10T13:47:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:52 vm09 ceph-mon[53367]: pgmap v710: 292 pgs: 292 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s 2026-03-10T13:47:53.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:53 vm05 ceph-mon[58955]: pgmap v711: 292 pgs: 292 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.3 KiB/s wr, 4 op/s 2026-03-10T13:47:53.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:53.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:53 vm05 ceph-mon[51512]: pgmap v711: 292 pgs: 292 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.3 KiB/s wr, 4 op/s 2026-03-10T13:47:53.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:53 vm09 ceph-mon[53367]: pgmap v711: 292 pgs: 292 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.3 KiB/s wr, 4 op/s 2026-03-10T13:47:53.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:55.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:55 vm05 ceph-mon[58955]: pgmap v712: 292 pgs: 292 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T13:47:55.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:55 vm05 ceph-mon[51512]: pgmap v712: 292 pgs: 292 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T13:47:55.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:55 vm09 ceph-mon[53367]: pgmap v712: 292 pgs: 292 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 4 op/s 2026-03-10T13:47:57.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:57 vm05 ceph-mon[58955]: pgmap v713: 292 pgs: 292 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 887 B/s wr, 3 op/s 2026-03-10T13:47:57.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:57 vm05 ceph-mon[51512]: pgmap v713: 292 pgs: 292 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 887 B/s wr, 3 op/s 2026-03-10T13:47:57.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:57 vm09 ceph-mon[53367]: pgmap v713: 292 pgs: 292 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 887 B/s wr, 3 op/s 2026-03-10T13:47:58.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:47:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:47:59.581 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:59 vm05 ceph-mon[58955]: pgmap v714: 292 pgs: 292 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 801 B/s wr, 3 op/s 2026-03-10T13:47:59.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:59 vm05 ceph-mon[58955]: osdmap e475: 8 total, 8 up, 8 in 2026-03-10T13:47:59.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:47:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:47:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:59 vm05 ceph-mon[51512]: pgmap v714: 292 pgs: 292 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 801 B/s wr, 3 op/s 2026-03-10T13:47:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:59 vm05 ceph-mon[51512]: osdmap e475: 8 total, 8 up, 8 in 2026-03-10T13:47:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:47:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:47:59.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:59 vm09 ceph-mon[53367]: pgmap v714: 292 pgs: 292 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 801 B/s wr, 3 op/s 2026-03-10T13:47:59.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:59 vm09 ceph-mon[53367]: osdmap e475: 8 total, 8 up, 8 in 2026-03-10T13:47:59.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:47:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:47:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:47:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:48:01.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:01 vm05 ceph-mon[58955]: pgmap v716: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-10T13:48:01.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:01 vm05 ceph-mon[51512]: pgmap v716: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-10T13:48:01.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:01 vm09 ceph-mon[53367]: pgmap v716: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-10T13:48:03.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:03 vm05 ceph-mon[58955]: pgmap v717: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-10T13:48:03.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:03 vm05 ceph-mon[51512]: pgmap v717: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-10T13:48:03.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:03 vm09 ceph-mon[53367]: pgmap v717: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 
2026-03-10T13:48:05.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:05 vm05 ceph-mon[58955]: pgmap v718: 292 pgs: 292 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-10T13:48:05.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:05 vm05 ceph-mon[51512]: pgmap v718: 292 pgs: 292 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-10T13:48:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:05 vm09 ceph-mon[53367]: pgmap v718: 292 pgs: 292 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-10T13:48:07.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:07 vm05 ceph-mon[58955]: pgmap v719: 292 pgs: 292 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-10T13:48:07.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:07 vm05 ceph-mon[58955]: osdmap e476: 8 total, 8 up, 8 in 2026-03-10T13:48:07.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:07 vm05 ceph-mon[51512]: pgmap v719: 292 pgs: 292 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-10T13:48:07.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:07 vm05 ceph-mon[51512]: osdmap e476: 8 total, 8 up, 8 in 2026-03-10T13:48:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:07 vm09 ceph-mon[53367]: pgmap v719: 292 pgs: 292 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s 2026-03-10T13:48:07.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:07 vm09 ceph-mon[53367]: osdmap e476: 8 total, 8 up, 8 in 2026-03-10T13:48:08.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:08.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:08.626 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:08.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:48:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:48:09.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:09 vm05 ceph-mon[58955]: pgmap v721: 292 pgs: 292 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 73 op/s 2026-03-10T13:48:09.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:48:09.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:09 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-94"}]: dispatch 2026-03-10T13:48:09.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:09 vm05 ceph-mon[51512]: pgmap v721: 292 pgs: 292 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 73 op/s 2026-03-10T13:48:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:48:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-94"}]: dispatch 2026-03-10T13:48:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:09.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:09 vm09 ceph-mon[53367]: pgmap v721: 292 pgs: 292 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail; 44 KiB/s rd, 0 B/s wr, 73 op/s 2026-03-10T13:48:09.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:48:09.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:09 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-94"}]: dispatch 2026-03-10T13:48:09.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:10.261 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:48:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:48:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:48:10.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:10 vm05 ceph-mon[58955]: osdmap e477: 8 total, 8 up, 8 in 2026-03-10T13:48:10.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:10 vm05 ceph-mon[58955]: osdmap e478: 8 total, 8 up, 8 in 2026-03-10T13:48:10.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:10 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:48:10.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:10 vm05 ceph-mon[51512]: osdmap e477: 8 total, 8 up, 8 in 2026-03-10T13:48:10.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:10 vm05 ceph-mon[51512]: osdmap e478: 8 total, 8 up, 8 in 2026-03-10T13:48:10.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:48:10.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:10 vm09 ceph-mon[53367]: osdmap e477: 8 total, 8 up, 8 in 2026-03-10T13:48:10.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:10 vm09 ceph-mon[53367]: osdmap e478: 8 total, 8 up, 8 in 2026-03-10T13:48:10.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:10 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:48:11.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:11 vm05 ceph-mon[58955]: pgmap v723: 260 pgs: 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:11.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:11 vm05 ceph-mon[51512]: pgmap v723: 260 pgs: 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:11.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:11 vm09 ceph-mon[53367]: pgmap v723: 260 pgs: 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:12.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:12 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:48:12.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:12 vm09 ceph-mon[53367]: osdmap e479: 8 total, 8 up, 8 in 2026-03-10T13:48:12.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:12 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:48:12.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:12 vm09 ceph-mon[53367]: osdmap e480: 8 total, 8 up, 8 in 2026-03-10T13:48:12.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:12 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:48:12.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:12 vm05 ceph-mon[58955]: osdmap e479: 8 total, 8 up, 8 in 2026-03-10T13:48:12.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:12 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:48:12.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:12 vm05 ceph-mon[58955]: osdmap e480: 8 total, 8 up, 8 in 2026-03-10T13:48:12.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:12 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:48:12.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:12 vm05 ceph-mon[51512]: osdmap e479: 8 total, 8 up, 8 in 2026-03-10T13:48:12.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:12 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:48:12.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:12 vm05 ceph-mon[51512]: osdmap e480: 8 total, 8 up, 8 in 2026-03-10T13:48:13.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:13 vm09 ceph-mon[53367]: pgmap v726: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:13 vm05 ceph-mon[58955]: pgmap v726: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:13.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:13 vm05 ceph-mon[51512]: pgmap v726: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:15.633 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:15 vm05 ceph-mon[58955]: pgmap v728: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1.0 KiB/s wr, 3 op/s 2026-03-10T13:48:15.634 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:15 vm05 ceph-mon[51512]: pgmap v728: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1.0 KiB/s wr, 3 op/s 2026-03-10T13:48:15.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:15 vm09 ceph-mon[53367]: pgmap v728: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1.0 KiB/s wr, 3 op/s 2026-03-10T13:48:17.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:17 vm09 ceph-mon[53367]: pgmap v729: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 853 B/s wr, 2 op/s 2026-03-10T13:48:17.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:17 vm05 ceph-mon[58955]: pgmap v729: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 853 B/s wr, 2 op/s 2026-03-10T13:48:17.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:17 vm05 ceph-mon[51512]: pgmap v729: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 853 B/s wr, 2 op/s 2026-03-10T13:48:18.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:48:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:48:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:19 vm09 ceph-mon[53367]: pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 649 B/s rd, 649 B/s wr, 2 op/s 2026-03-10T13:48:19.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:19 vm09 ceph-mon[53367]: 
from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:19 vm05 ceph-mon[58955]: pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 649 B/s rd, 649 B/s wr, 2 op/s 2026-03-10T13:48:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:19 vm05 ceph-mon[51512]: pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 649 B/s rd, 649 B/s wr, 2 op/s 2026-03-10T13:48:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:48:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:48:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:48:21.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:21 vm09 ceph-mon[53367]: pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 579 B/s wr, 2 op/s 2026-03-10T13:48:21.674 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:48:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=cleanup t=2026-03-10T13:48:21.516622492Z level=info msg="Completed cleanup jobs" duration=1.835745ms 2026-03-10T13:48:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:21 vm05 ceph-mon[58955]: pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 579 B/s wr, 2 op/s 2026-03-10T13:48:21.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:21 vm05 ceph-mon[51512]: pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 579 B/s wr, 2 op/s 2026-03-10T13:48:22.174 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:48:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=plugins.update.checker t=2026-03-10T13:48:21.692602433Z level=info msg="Update check succeeded" duration=55.669673ms 2026-03-10T13:48:22.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:48:22.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:22 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-96"}]: dispatch 2026-03-10T13:48:22.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 13:48:22 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T13:48:22.361+0000 7f4608744640 -1 snap_mapper.add_oid found existing snaps mapped on 100:f3505a79:test-rados-api-vm05-91276-97::foo:21, removing 2026-03-10T13:48:22.674 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 13:48:22 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4[57743]: 2026-03-10T13:48:22.360+0000 7f6b12932640 -1 snap_mapper.add_oid found existing snaps mapped on 100:f3505a79:test-rados-api-vm05-91276-97::foo:21, removing 2026-03-10T13:48:22.831 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 13:48:22 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1[68059]: 2026-03-10T13:48:22.360+0000 7f83cf13b640 -1 snap_mapper.add_oid found existing snaps mapped on 100:f3505a79:test-rados-api-vm05-91276-97::foo:21, removing 2026-03-10T13:48:22.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:48:22.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-96"}]: dispatch 2026-03-10T13:48:22.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:48:22.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:22 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-96"}]: dispatch 2026-03-10T13:48:23.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:23 vm09 ceph-mon[53367]: pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 2 op/s 2026-03-10T13:48:23.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:23 vm05 ceph-mon[58955]: pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 2 op/s 2026-03-10T13:48:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:23 vm05 ceph-mon[51512]: pgmap v732: 292 pgs: 292 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 2 op/s 2026-03-10T13:48:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:24 vm09 ceph-mon[53367]: osdmap e481: 8 total, 8 up, 8 in 2026-03-10T13:48:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:24 vm09 ceph-mon[53367]: osdmap e482: 8 total, 8 up, 8 in 2026-03-10T13:48:24.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:48:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:24 vm05 ceph-mon[58955]: osdmap e481: 8 total, 8 up, 8 in 2026-03-10T13:48:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:24 vm05 ceph-mon[58955]: osdmap e482: 8 total, 8 up, 8 in 2026-03-10T13:48:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:48:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:24 vm05 ceph-mon[51512]: osdmap e481: 8 total, 8 up, 8 in 2026-03-10T13:48:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:24 vm05 ceph-mon[51512]: osdmap e482: 8 total, 8 up, 8 in 2026-03-10T13:48:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:24 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:48:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:25 vm09 ceph-mon[53367]: pgmap v734: 260 pgs: 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 op/s 2026-03-10T13:48:25.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:48:25.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:25 vm09 ceph-mon[53367]: osdmap e483: 8 total, 8 up, 8 in 2026-03-10T13:48:25.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:25 vm05 ceph-mon[58955]: pgmap v734: 260 pgs: 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 op/s 2026-03-10T13:48:25.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:48:25.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:25 vm05 ceph-mon[58955]: osdmap e483: 8 total, 8 up, 8 in 2026-03-10T13:48:25.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:25 vm05 ceph-mon[51512]: pgmap v734: 260 pgs: 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 op/s 2026-03-10T13:48:25.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:48:25.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:25 vm05 ceph-mon[51512]: osdmap e483: 8 total, 8 up, 8 in 2026-03-10T13:48:26.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:48:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:48:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:26 vm09 ceph-mon[53367]: osdmap e484: 8 total, 8 up, 8 in 2026-03-10T13:48:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-98", "mode": "writeback"}]: dispatch 2026-03-10T13:48:26.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:26 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:48:26.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:48:26.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:26 vm05 ceph-mon[58955]: osdmap e484: 8 total, 8 up, 8 in 2026-03-10T13:48:26.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-98", "mode": "writeback"}]: dispatch 2026-03-10T13:48:26.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:48:26.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:48:26.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:26 vm05 ceph-mon[51512]: osdmap e484: 8 total, 8 up, 8 in 2026-03-10T13:48:26.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-98", "mode": "writeback"}]: dispatch 2026-03-10T13:48:27.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:27 vm05 ceph-mon[58955]: pgmap v737: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:48:27.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:27 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:48:27.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-98", "mode": "writeback"}]': finished 2026-03-10T13:48:27.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:27 vm05 ceph-mon[58955]: osdmap e485: 8 total, 8 up, 8 in 2026-03-10T13:48:27.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:27 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-98"}]: dispatch 2026-03-10T13:48:27.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:27 vm05 ceph-mon[51512]: pgmap v737: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:48:27.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:27 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:48:27.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-98", "mode": "writeback"}]': finished 2026-03-10T13:48:27.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:27 vm05 ceph-mon[51512]: osdmap e485: 8 total, 8 up, 8 in 2026-03-10T13:48:27.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-98"}]: dispatch 2026-03-10T13:48:27.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:27 vm09 ceph-mon[53367]: pgmap v737: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:48:27.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:27 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:48:27.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-98", "mode": "writeback"}]': finished 2026-03-10T13:48:27.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:27 vm09 ceph-mon[53367]: osdmap e485: 8 total, 8 up, 8 in 2026-03-10T13:48:27.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-98"}]: dispatch 2026-03-10T13:48:28.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:48:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:48:29.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:29 vm09 ceph-mon[53367]: pgmap v740: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-98"}]': finished 2026-03-10T13:48:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:29 vm09 ceph-mon[53367]: osdmap e486: 8 total, 8 up, 8 in 2026-03-10T13:48:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:29 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:48:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:29.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:29 vm05 ceph-mon[58955]: pgmap v740: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:29.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-98"}]': finished 2026-03-10T13:48:29.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:29 vm05 ceph-mon[58955]: osdmap e486: 8 total, 8 up, 8 in 2026-03-10T13:48:29.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:48:29.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:29.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:29 vm05 ceph-mon[51512]: pgmap v740: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 958 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:29.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-98"}]': finished 2026-03-10T13:48:29.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:29 vm05 ceph-mon[51512]: osdmap e486: 8 total, 8 up, 8 in 2026-03-10T13:48:29.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:29 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:48:29.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:48:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:48:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:48:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:30 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:48:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:30 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T13:48:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:30 vm09 ceph-mon[53367]: osdmap e487: 8 total, 8 up, 8 in 2026-03-10T13:48:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T13:48:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_count","val": "1"}]': finished 2026-03-10T13:48:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:30 vm09 ceph-mon[53367]: osdmap e488: 8 total, 8 up, 8 in 2026-03-10T13:48:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:48:30.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:30 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:48:30.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T13:48:30.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:30 vm05 ceph-mon[58955]: osdmap e487: 8 total, 8 up, 8 in 2026-03-10T13:48:30.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T13:48:30.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_count","val": "1"}]': finished 2026-03-10T13:48:30.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:30 vm05 ceph-mon[58955]: osdmap e488: 8 total, 8 up, 8 in 2026-03-10T13:48:30.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:48:30.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:30 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:48:30.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:30 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T13:48:30.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:30 vm05 ceph-mon[51512]: osdmap e487: 8 total, 8 up, 8 in 2026-03-10T13:48:30.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T13:48:30.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_count","val": "1"}]': finished 2026-03-10T13:48:30.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:30 vm05 ceph-mon[51512]: osdmap e488: 8 total, 8 up, 8 in 2026-03-10T13:48:30.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:48:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:31 vm09 ceph-mon[53367]: pgmap v743: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:48:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:48:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:31 vm09 ceph-mon[53367]: osdmap e489: 8 total, 8 up, 8 in 2026-03-10T13:48:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T13:48:31.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:31 vm05 ceph-mon[58955]: pgmap v743: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:48:31.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:48:31.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:31 vm05 ceph-mon[58955]: osdmap e489: 8 total, 8 up, 8 in 2026-03-10T13:48:31.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T13:48:31.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:31 vm05 ceph-mon[51512]: pgmap v743: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:48:31.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:31 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:48:31.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:31 vm05 ceph-mon[51512]: osdmap e489: 8 total, 8 up, 8 in 2026-03-10T13:48:31.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T13:48:33.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:33 vm09 ceph-mon[53367]: pgmap v746: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:48:33.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "target_max_objects","val": "250"}]': finished 2026-03-10T13:48:33.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:33 vm09 ceph-mon[53367]: osdmap e490: 8 total, 8 up, 8 in 2026-03-10T13:48:33.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:48:33.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:33 vm05 ceph-mon[58955]: pgmap v746: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:48:33.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "target_max_objects","val": "250"}]': finished 2026-03-10T13:48:33.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:33 vm05 ceph-mon[58955]: osdmap e490: 8 total, 8 up, 8 in 2026-03-10T13:48:33.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:48:33.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:33 vm05 ceph-mon[51512]: pgmap v746: 292 pgs: 292 active+clean; 8.3 MiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:48:33.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-98","var": "target_max_objects","val": "250"}]': finished 2026-03-10T13:48:33.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:33 vm05 ceph-mon[51512]: osdmap e490: 8 total, 8 up, 8 in 2026-03-10T13:48:33.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:48:34.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:34 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:48:34.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:34 vm05 ceph-mon[58955]: osdmap e491: 8 total, 8 up, 8 in 2026-03-10T13:48:34.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-98"}]: dispatch 2026-03-10T13:48:34.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:48:34.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:34 vm05 ceph-mon[51512]: osdmap e491: 8 total, 8 up, 8 in 2026-03-10T13:48:34.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-98"}]: dispatch 2026-03-10T13:48:34.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:48:34.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:34 vm09 ceph-mon[53367]: osdmap e491: 8 total, 8 up, 8 in 2026-03-10T13:48:34.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-98"}]: dispatch 2026-03-10T13:48:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:35 vm05 ceph-mon[58955]: pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:48:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-98"}]': finished 2026-03-10T13:48:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:35 vm05 ceph-mon[58955]: osdmap e492: 8 total, 8 up, 8 in 2026-03-10T13:48:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:35 vm05 ceph-mon[51512]: pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:48:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:35 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-98"}]': finished 2026-03-10T13:48:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:35 vm05 ceph-mon[51512]: osdmap e492: 8 total, 8 up, 8 in 2026-03-10T13:48:35.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:35 vm09 ceph-mon[53367]: pgmap v749: 292 pgs: 292 active+clean; 8.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:48:35.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-98"}]': finished 2026-03-10T13:48:35.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:35 vm09 ceph-mon[53367]: osdmap e492: 8 total, 8 up, 8 in 2026-03-10T13:48:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:36 vm05 ceph-mon[58955]: osdmap e493: 8 total, 8 up, 8 in 2026-03-10T13:48:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:36 vm05 ceph-mon[58955]: osdmap e494: 8 total, 8 up, 8 in 2026-03-10T13:48:36.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:36 vm05 ceph-mon[51512]: osdmap e493: 8 total, 8 up, 8 in 2026-03-10T13:48:36.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:36 vm05 ceph-mon[51512]: osdmap e494: 8 total, 8 up, 8 in 2026-03-10T13:48:36.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:36 vm09 ceph-mon[53367]: osdmap e493: 8 total, 8 up, 8 in 2026-03-10T13:48:36.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:36 vm09 ceph-mon[53367]: osdmap e494: 8 total, 8 up, 8 in 2026-03-10T13:48:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:37 vm05 ceph-mon[58955]: pgmap v752: 260 pgs: 260 active+clean; 8.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:48:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:48:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:48:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:37 vm05 ceph-mon[58955]: osdmap e495: 8 total, 8 up, 8 in 2026-03-10T13:48:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:37 vm05 ceph-mon[51512]: pgmap v752: 260 pgs: 260 active+clean; 8.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:48:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:48:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:37 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:48:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:37 vm05 ceph-mon[51512]: osdmap e495: 8 total, 8 up, 8 in 2026-03-10T13:48:37.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:37 vm09 ceph-mon[53367]: pgmap v752: 260 pgs: 260 active+clean; 8.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:48:37.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:48:37.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:48:37.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:37 vm09 ceph-mon[53367]: osdmap e495: 8 total, 8 up, 8 in 2026-03-10T13:48:38.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:48:38.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:38.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:48:38.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:38 vm05 ceph-mon[58955]: osdmap e496: 8 total, 8 up, 8 in 2026-03-10T13:48:38.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:38 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-100"}]: dispatch 2026-03-10T13:48:38.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:48:38.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:38.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:38 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:48:38.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:38 vm05 ceph-mon[51512]: osdmap e496: 8 total, 8 up, 8 in 2026-03-10T13:48:38.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-100"}]: dispatch 2026-03-10T13:48:38.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:48:38.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:38.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:48:38.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:38 vm09 ceph-mon[53367]: osdmap e496: 8 total, 8 up, 8 in 2026-03-10T13:48:38.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-100"}]: dispatch 2026-03-10T13:48:38.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:48:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:48:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:39 vm05 ceph-mon[58955]: pgmap v755: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-100"}]': finished 2026-03-10T13:48:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:39 vm05 ceph-mon[58955]: osdmap e497: 8 total, 8 up, 8 in 2026-03-10T13:48:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:39 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-100", "mode": "writeback"}]: dispatch 2026-03-10T13:48:39.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:39 vm05 ceph-mon[51512]: pgmap v755: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:39.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:39.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-100"}]': finished 2026-03-10T13:48:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:39 vm05 ceph-mon[51512]: osdmap e497: 8 total, 8 up, 8 in 2026-03-10T13:48:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:39 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-100", "mode": "writeback"}]: dispatch 2026-03-10T13:48:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:39 vm09 ceph-mon[53367]: pgmap v755: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 977 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-6", "overlaypool": "test-rados-api-vm05-91276-100"}]': finished 2026-03-10T13:48:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:39 vm09 ceph-mon[53367]: osdmap e497: 8 total, 8 up, 8 in 2026-03-10T13:48:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:39 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-100", "mode": "writeback"}]: dispatch 2026-03-10T13:48:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:48:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:48:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:48:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:40 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:48:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:40 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-100", "mode": "writeback"}]': finished 2026-03-10T13:48:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:40 vm05 ceph-mon[58955]: osdmap e498: 8 total, 8 up, 8 in 2026-03-10T13:48:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:40 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:48:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-100", "mode": "writeback"}]': finished 2026-03-10T13:48:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:40 vm05 ceph-mon[51512]: osdmap e498: 8 total, 8 up, 8 in 2026-03-10T13:48:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:40 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:48:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:40 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-100", "mode": "writeback"}]': finished 2026-03-10T13:48:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:40 vm09 ceph-mon[53367]: osdmap e498: 8 total, 8 up, 8 in 2026-03-10T13:48:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[58955]: pgmap v758: 292 pgs: 292 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:48:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:48:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:48:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:48:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:48:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:48:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_count","val": "2"}]': finished 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[58955]: osdmap e499: 8 total, 8 up, 8 in 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[51512]: pgmap v758: 292 pgs: 292 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_count","val": "2"}]': finished 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[51512]: osdmap e499: 8 total, 8 up, 8 in 2026-03-10T13:48:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:48:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:41 vm09 ceph-mon[53367]: pgmap v758: 292 pgs: 292 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:48:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:41 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:48:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:41 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:48:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:41 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:48:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:41 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:48:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:41 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:48:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:41 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:48:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:41 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:48:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:41 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:48:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:41 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:48:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_count","val": "2"}]': finished 2026-03-10T13:48:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:41 vm09 ceph-mon[53367]: osdmap e499: 8 total, 8 up, 8 in 2026-03-10T13:48:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:48:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:43 vm05 ceph-mon[58955]: pgmap v761: 292 pgs: 292 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:48:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:48:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:43 vm05 ceph-mon[58955]: osdmap e500: 8 total, 8 up, 8 in 2026-03-10T13:48:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:43 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T13:48:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:43 vm05 ceph-mon[51512]: pgmap v761: 292 pgs: 292 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:48:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:48:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:43 vm05 ceph-mon[51512]: osdmap e500: 8 total, 8 up, 8 in 2026-03-10T13:48:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:43 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T13:48:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:43 vm09 ceph-mon[53367]: pgmap v761: 292 pgs: 292 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:48:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:48:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:43 vm09 ceph-mon[53367]: osdmap e500: 8 total, 8 up, 8 in 2026-03-10T13:48:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:43 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T13:48:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:44 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:48:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T13:48:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:44 vm05 ceph-mon[58955]: osdmap e501: 8 total, 8 up, 8 in 2026-03-10T13:48:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-10T13:48:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:44 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-10T13:48:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:44 vm05 ceph-mon[58955]: osdmap e502: 8 total, 8 up, 8 in 2026-03-10T13:48:44.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:44 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:48:44.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T13:48:44.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:44 vm05 ceph-mon[51512]: osdmap e501: 8 total, 8 up, 8 in 2026-03-10T13:48:44.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-10T13:48:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-10T13:48:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:44 vm05 ceph-mon[51512]: osdmap e502: 8 total, 8 up, 8 in 2026-03-10T13:48:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:44 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:48:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T13:48:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:44 vm09 ceph-mon[53367]: osdmap e501: 8 total, 8 up, 8 in 2026-03-10T13:48:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-10T13:48:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-10T13:48:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:44 vm09 ceph-mon[53367]: osdmap e502: 8 total, 8 up, 8 in 2026-03-10T13:48:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:45 vm09 ceph-mon[53367]: pgmap v764: 292 pgs: 292 active+clean; 8.3 MiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:48:45.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:45 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:48:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:45 vm05 ceph-mon[58955]: pgmap v764: 292 pgs: 292 active+clean; 8.3 MiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:48:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:48:46.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:45 vm05 ceph-mon[51512]: pgmap v764: 292 pgs: 292 active+clean; 8.3 MiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:48:46.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:45 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:48:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:48:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:46 vm09 ceph-mon[53367]: osdmap e503: 8 total, 8 up, 8 in 2026-03-10T13:48:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-100"}]: dispatch 2026-03-10T13:48:47.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:48:47.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:46 vm05 ceph-mon[58955]: osdmap e503: 8 total, 8 up, 8 in 2026-03-10T13:48:47.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-100"}]: dispatch 2026-03-10T13:48:47.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]': finished 2026-03-10T13:48:47.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:46 vm05 ceph-mon[51512]: osdmap e503: 8 total, 8 up, 8 in 2026-03-10T13:48:47.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-100"}]: dispatch 2026-03-10T13:48:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:47 vm09 ceph-mon[53367]: pgmap v767: 292 pgs: 292 active+clean; 8.3 MiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:48:47.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:47 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-100"}]': finished 2026-03-10T13:48:47.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:47 vm09 ceph-mon[53367]: osdmap e504: 8 total, 8 up, 8 in 2026-03-10T13:48:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:47 vm05 ceph-mon[58955]: pgmap v767: 292 pgs: 292 active+clean; 8.3 MiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:48:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-100"}]': finished 2026-03-10T13:48:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:47 vm05 ceph-mon[58955]: osdmap e504: 8 total, 8 up, 8 in 2026-03-10T13:48:48.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:47 vm05 ceph-mon[51512]: pgmap v767: 292 pgs: 292 active+clean; 8.3 MiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:48:48.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-100"}]': finished 2026-03-10T13:48:48.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:47 vm05 ceph-mon[51512]: osdmap e504: 8 total, 8 up, 8 in 2026-03-10T13:48:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:48 vm09 ceph-mon[53367]: osdmap e505: 8 total, 8 up, 8 in 2026-03-10T13:48:48.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:48:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:48:49.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:48 vm05 ceph-mon[58955]: osdmap e505: 8 total, 8 up, 8 in 2026-03-10T13:48:49.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:48 vm05 ceph-mon[51512]: osdmap e505: 8 total, 8 up, 8 in 2026-03-10T13:48:50.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:49 vm05 ceph-mon[58955]: pgmap v770: 260 pgs: 260 active+clean; 8.3 MiB data, 979 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:50.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:50.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:49 vm05 ceph-mon[58955]: osdmap e506: 8 total, 8 up, 8 in 2026-03-10T13:48:50.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:49 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:48:50.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:49 vm05 ceph-mon[51512]: pgmap v770: 260 pgs: 260 active+clean; 8.3 MiB data, 979 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:50.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:50.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:49 vm05 ceph-mon[51512]: osdmap e506: 8 total, 8 up, 8 in 2026-03-10T13:48:50.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:48:50.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:48:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:48:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:48:50.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:49 vm09 ceph-mon[53367]: pgmap v770: 260 pgs: 260 active+clean; 8.3 MiB data, 979 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:50.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:48:50.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:49 vm09 ceph-mon[53367]: osdmap e506: 8 total, 8 up, 8 in 2026-03-10T13:48:50.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:48:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:48:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:50 vm05 ceph-mon[58955]: osdmap e507: 8 total, 8 up, 8 in 2026-03-10T13:48:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:48:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:48:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:50 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:48:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:50 vm05 ceph-mon[58955]: osdmap e508: 8 total, 8 up, 8 in 2026-03-10T13:48:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T13:48:51.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:48:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:50 vm05 ceph-mon[51512]: osdmap e507: 8 total, 8 up, 8 in 2026-03-10T13:48:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:48:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:48:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:48:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:50 vm05 ceph-mon[51512]: osdmap e508: 8 total, 8 up, 8 in 2026-03-10T13:48:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T13:48:51.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:48:51.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:50 vm09 ceph-mon[53367]: osdmap e507: 8 total, 8 up, 8 in 2026-03-10T13:48:51.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T13:48:51.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T13:48:51.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:50 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T13:48:51.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:50 vm09 ceph-mon[53367]: osdmap e508: 8 total, 8 up, 8 in 2026-03-10T13:48:51.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T13:48:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:51 vm05 ceph-mon[58955]: pgmap v773: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:48:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T13:48:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:51 vm05 ceph-mon[58955]: osdmap e509: 8 total, 8 up, 8 in 2026-03-10T13:48:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T13:48:52.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:51 vm05 ceph-mon[51512]: pgmap v773: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:48:52.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:51 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T13:48:52.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:51 vm05 ceph-mon[51512]: osdmap e509: 8 total, 8 up, 8 in 2026-03-10T13:48:52.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:51 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T13:48:52.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:51 vm09 ceph-mon[53367]: pgmap v773: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:48:52.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T13:48:52.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:51 vm09 ceph-mon[53367]: osdmap e509: 8 total, 8 up, 8 in 2026-03-10T13:48:52.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:51 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T13:48:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:53 vm05 ceph-mon[58955]: pgmap v776: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:48:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T13:48:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:53 vm05 ceph-mon[58955]: osdmap e510: 8 total, 8 up, 8 in 2026-03-10T13:48:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:48:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-102"}]: dispatch 2026-03-10T13:48:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:54.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:53 vm05 ceph-mon[51512]: pgmap v776: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:48:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T13:48:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:53 vm05 ceph-mon[51512]: osdmap e510: 8 total, 8 up, 8 in 2026-03-10T13:48:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:48:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-102"}]: dispatch 2026-03-10T13:48:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:54.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:53 vm09 ceph-mon[53367]: pgmap v776: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 1014 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:48:54.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:53 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2118521834' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T13:48:54.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:53 vm09 ceph-mon[53367]: osdmap e510: 8 total, 8 up, 8 in 2026-03-10T13:48:54.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-6"}]: dispatch 2026-03-10T13:48:54.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2118521834' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-6", "tierpool": "test-rados-api-vm05-91276-102"}]: dispatch 2026-03-10T13:48:54.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:55.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:54 vm05 ceph-mon[58955]: osdmap e511: 8 total, 8 up, 8 in 2026-03-10T13:48:55.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:54 vm05 ceph-mon[51512]: osdmap e511: 8 total, 8 up, 8 in 2026-03-10T13:48:55.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:54 vm09 ceph-mon[53367]: osdmap e511: 8 total, 8 up, 8 in 2026-03-10T13:48:56.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[58955]: pgmap v779: 260 pgs: 260 active+clean; 8.3 MiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:48:56.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[58955]: osdmap e512: 8 total, 8 up, 8 in 2026-03-10T13:48:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[58955]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[58955]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm05-91276-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:48:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[58955]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm05-91276-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:48:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[51512]: pgmap v779: 260 pgs: 260 active+clean; 8.3 MiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:48:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[51512]: osdmap e512: 8 total, 8 up, 8 in 2026-03-10T13:48:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[51512]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[51512]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm05-91276-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:48:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:55 vm05 ceph-mon[51512]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm05-91276-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:48:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:55 vm09 ceph-mon[53367]: pgmap v779: 260 pgs: 260 active+clean; 8.3 MiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T13:48:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:55 vm09 ceph-mon[53367]: osdmap e512: 8 total, 8 up, 8 in 2026-03-10T13:48:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:55 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:55 vm09 ceph-mon[53367]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:55 vm09 ceph-mon[53367]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm05-91276-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:48:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:55 vm09 ceph-mon[53367]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm05-91276-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:48:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:56 vm05 ceph-mon[58955]: from='client.50530 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm05-91276-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:48:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:56 vm05 ceph-mon[58955]: osdmap e513: 8 total, 8 up, 8 in 2026-03-10T13:48:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:56 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm05-91276-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:56 vm05 ceph-mon[58955]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm05-91276-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:57.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:56 vm05 ceph-mon[51512]: from='client.50530 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm05-91276-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:48:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:56 vm05 ceph-mon[51512]: osdmap e513: 8 total, 8 up, 8 in 2026-03-10T13:48:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:56 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm05-91276-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:56 vm05 ceph-mon[51512]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm05-91276-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:56 vm09 ceph-mon[53367]: from='client.50530 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm05-91276-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:48:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:56 vm09 ceph-mon[53367]: osdmap e513: 8 total, 8 up, 8 in 2026-03-10T13:48:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:56 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm05-91276-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:56 vm09 ceph-mon[53367]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm05-91276-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:48:58.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:57 vm09 ceph-mon[53367]: pgmap v782: 228 pgs: 228 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:58.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:57 vm09 ceph-mon[53367]: osdmap e514: 8 total, 8 up, 8 in 2026-03-10T13:48:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:57 vm05 ceph-mon[58955]: pgmap v782: 228 pgs: 228 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:57 vm05 ceph-mon[58955]: osdmap e514: 8 total, 8 up, 8 in 2026-03-10T13:48:58.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:57 vm05 ceph-mon[51512]: pgmap v782: 228 pgs: 228 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:48:58.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:57 vm05 ceph-mon[51512]: osdmap e514: 8 total, 8 up, 8 in 2026-03-10T13:48:59.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:58 vm09 ceph-mon[53367]: from='client.50530 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm05-91276-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm05-91276-104"}]': finished 2026-03-10T13:48:59.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:58 vm09 ceph-mon[53367]: osdmap e515: 8 total, 8 up, 8 in 2026-03-10T13:48:59.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:48:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:48:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:58 vm05 
ceph-mon[58955]: from='client.50530 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm05-91276-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm05-91276-104"}]': finished 2026-03-10T13:48:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:58 vm05 ceph-mon[58955]: osdmap e515: 8 total, 8 up, 8 in 2026-03-10T13:48:59.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:58 vm05 ceph-mon[51512]: from='client.50530 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm05-91276-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm05-91276-104"}]': finished 2026-03-10T13:48:59.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:58 vm05 ceph-mon[51512]: osdmap e515: 8 total, 8 up, 8 in 2026-03-10T13:49:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:59 vm09 ceph-mon[53367]: pgmap v785: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:59 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:48:59 vm09 ceph-mon[53367]: osdmap e516: 8 total, 8 up, 8 in 2026-03-10T13:49:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:59 vm05 ceph-mon[58955]: pgmap v785: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:59 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:48:59 vm05 ceph-mon[58955]: osdmap e516: 8 total, 8 up, 8 in 2026-03-10T13:49:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:59 vm05 ceph-mon[51512]: pgmap v785: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:59 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:48:59 vm05 ceph-mon[51512]: osdmap e516: 8 total, 8 up, 8 in 2026-03-10T13:49:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:48:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:48:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:49:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:00 vm09 
ceph-mon[53367]: osdmap e517: 8 total, 8 up, 8 in 2026-03-10T13:49:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:00 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:01.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:00 vm05 ceph-mon[58955]: osdmap e517: 8 total, 8 up, 8 in 2026-03-10T13:49:01.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:00 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:01.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:00 vm05 ceph-mon[51512]: osdmap e517: 8 total, 8 up, 8 in 2026-03-10T13:49:01.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-107","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:01 vm09 ceph-mon[53367]: pgmap v788: 268 pgs: 32 unknown, 8 creating+peering, 228 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:01 vm09 ceph-mon[53367]: osdmap e518: 8 total, 8 up, 8 in 2026-03-10T13:49:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:01 vm09 ceph-mon[53367]: osdmap e519: 8 total, 8 up, 8 in 2026-03-10T13:49:02.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:02.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:01 vm05 ceph-mon[58955]: pgmap v788: 268 pgs: 32 unknown, 8 creating+peering, 228 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:02.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:02.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:01 vm05 ceph-mon[58955]: osdmap e518: 8 total, 8 up, 8 in 2026-03-10T13:49:02.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:01 vm05 ceph-mon[58955]: osdmap e519: 8 total, 8 up, 8 in 2026-03-10T13:49:02.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:01 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:02.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:01 vm05 ceph-mon[51512]: pgmap v788: 268 pgs: 32 unknown, 8 creating+peering, 228 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:02.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:02.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:01 vm05 ceph-mon[51512]: osdmap e518: 8 total, 8 up, 8 in 2026-03-10T13:49:02.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:01 vm05 ceph-mon[51512]: osdmap e519: 8 total, 8 up, 8 in 2026-03-10T13:49:02.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:03 vm05 ceph-mon[58955]: pgmap v791: 300 pgs: 64 unknown, 8 creating+peering, 228 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:03 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:03 vm05 ceph-mon[58955]: osdmap e520: 8 total, 8 up, 8 in 2026-03-10T13:49:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:03 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-107", "tierpool": "test-rados-api-vm05-91276-107-cache"}]: dispatch 2026-03-10T13:49:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:03 vm05 ceph-mon[51512]: pgmap v791: 300 pgs: 64 unknown, 8 creating+peering, 228 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:04.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:03 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:04.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:03 vm05 ceph-mon[51512]: osdmap e520: 8 total, 8 up, 8 in 2026-03-10T13:49:04.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:03 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-107", "tierpool": "test-rados-api-vm05-91276-107-cache"}]: dispatch 2026-03-10T13:49:04.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:03 vm09 ceph-mon[53367]: pgmap v791: 300 pgs: 64 unknown, 8 creating+peering, 228 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:04.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:03 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:04.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:03 vm09 ceph-mon[53367]: osdmap e520: 8 total, 8 up, 8 in 2026-03-10T13:49:04.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:03 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-107", "tierpool": "test-rados-api-vm05-91276-107-cache"}]: dispatch 2026-03-10T13:49:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-107", "tierpool": "test-rados-api-vm05-91276-107-cache"}]': finished 2026-03-10T13:49:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:04 vm05 ceph-mon[58955]: osdmap e521: 8 total, 8 up, 8 in 2026-03-10T13:49:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-107", "overlaypool": "test-rados-api-vm05-91276-107-cache"}]: dispatch 2026-03-10T13:49:05.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-107", "overlaypool": "test-rados-api-vm05-91276-107-cache"}]': finished 2026-03-10T13:49:05.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:04 vm05 ceph-mon[58955]: osdmap e522: 8 total, 8 up, 8 in 2026-03-10T13:49:05.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:04 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-107-cache", "mode": "writeback"}]: dispatch 2026-03-10T13:49:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-107", "tierpool": "test-rados-api-vm05-91276-107-cache"}]': finished 2026-03-10T13:49:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:04 vm05 ceph-mon[51512]: osdmap e521: 8 total, 8 up, 8 in 2026-03-10T13:49:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:04 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-107", "overlaypool": "test-rados-api-vm05-91276-107-cache"}]: dispatch 2026-03-10T13:49:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-107", "overlaypool": "test-rados-api-vm05-91276-107-cache"}]': finished 2026-03-10T13:49:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:04 vm05 ceph-mon[51512]: osdmap e522: 8 total, 8 up, 8 in 2026-03-10T13:49:05.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:04 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-107-cache", "mode": "writeback"}]: dispatch 2026-03-10T13:49:05.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-107", "tierpool": "test-rados-api-vm05-91276-107-cache"}]': finished 2026-03-10T13:49:05.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:04 vm09 ceph-mon[53367]: osdmap e521: 8 total, 8 up, 8 in 2026-03-10T13:49:05.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-107", "overlaypool": "test-rados-api-vm05-91276-107-cache"}]: dispatch 2026-03-10T13:49:05.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-107", "overlaypool": "test-rados-api-vm05-91276-107-cache"}]': finished 2026-03-10T13:49:05.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:04 vm09 ceph-mon[53367]: osdmap e522: 8 total, 8 up, 8 in 2026-03-10T13:49:05.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:04 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-107-cache", "mode": "writeback"}]: dispatch 2026-03-10T13:49:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:05 vm05 ceph-mon[58955]: pgmap v794: 300 pgs: 300 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:49:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:05 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:05 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:49:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:05 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-107-cache", "mode": "writeback"}]': finished 2026-03-10T13:49:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:05 vm05 ceph-mon[58955]: osdmap e523: 8 total, 8 up, 8 in 2026-03-10T13:49:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:05 vm05 ceph-mon[51512]: pgmap v794: 300 pgs: 300 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:49:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:05 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:05 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:49:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:05 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-107-cache", "mode": "writeback"}]': finished 2026-03-10T13:49:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:05 vm05 ceph-mon[51512]: osdmap e523: 8 total, 8 up, 8 in 2026-03-10T13:49:06.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:05 vm09 ceph-mon[53367]: pgmap v794: 300 pgs: 300 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:49:06.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:05 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:06.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:05 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:49:06.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:05 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-107-cache", "mode": "writeback"}]': finished 2026-03-10T13:49:06.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:05 vm09 ceph-mon[53367]: osdmap e523: 8 total, 8 up, 8 in 2026-03-10T13:49:07.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:06 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-107"}]: dispatch 2026-03-10T13:49:07.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:06 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-107"}]: dispatch 2026-03-10T13:49:07.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:06 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-107"}]: dispatch 2026-03-10T13:49:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:07 vm05 ceph-mon[58955]: pgmap v797: 300 pgs: 300 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:49:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:07 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-107"}]': finished 2026-03-10T13:49:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:07 vm05 ceph-mon[58955]: osdmap e524: 8 total, 8 up, 8 in 2026-03-10T13:49:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:07 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-107", "tierpool": "test-rados-api-vm05-91276-107-cache"}]: dispatch 2026-03-10T13:49:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:07 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:07 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:07 vm05 ceph-mon[51512]: pgmap v797: 300 pgs: 300 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:49:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-107"}]': finished 2026-03-10T13:49:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:07 vm05 ceph-mon[51512]: osdmap e524: 8 total, 8 up, 8 in 2026-03-10T13:49:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:07 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-107", "tierpool": "test-rados-api-vm05-91276-107-cache"}]: dispatch 2026-03-10T13:49:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:07 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:07 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:08.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:07 vm09 ceph-mon[53367]: pgmap v797: 300 pgs: 300 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:49:08.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:07 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-107"}]': finished 2026-03-10T13:49:08.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:07 vm09 ceph-mon[53367]: osdmap e524: 8 total, 8 up, 8 in 2026-03-10T13:49:08.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:07 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/4071998363' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-107", "tierpool": "test-rados-api-vm05-91276-107-cache"}]: dispatch 2026-03-10T13:49:08.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:07 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:08.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:07 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:08.989 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:49:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:49:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:08 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:49:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:08 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-107", "tierpool": "test-rados-api-vm05-91276-107-cache"}]': finished 2026-03-10T13:49:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:08 vm05 ceph-mon[58955]: osdmap e525: 8 total, 8 up, 8 in 2026-03-10T13:49:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:08 vm05 ceph-mon[58955]: osdmap e526: 8 total, 8 up, 8 in 2026-03-10T13:49:09.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:08 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:49:09.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-107", "tierpool": "test-rados-api-vm05-91276-107-cache"}]': finished 2026-03-10T13:49:09.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:08 vm05 ceph-mon[51512]: osdmap e525: 8 total, 8 up, 8 in 2026-03-10T13:49:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:08 vm05 ceph-mon[51512]: osdmap e526: 8 total, 8 up, 8 in 2026-03-10T13:49:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:08 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:49:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:08 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/4071998363' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-107", "tierpool": "test-rados-api-vm05-91276-107-cache"}]': finished 2026-03-10T13:49:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:08 vm09 ceph-mon[53367]: osdmap e525: 8 total, 8 up, 8 in 2026-03-10T13:49:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:08 vm09 ceph-mon[53367]: osdmap e526: 8 total, 8 up, 8 in 2026-03-10T13:49:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:09 vm05 ceph-mon[58955]: pgmap v800: 300 pgs: 300 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:09 vm05 ceph-mon[58955]: osdmap e527: 8 total, 8 up, 8 in 2026-03-10T13:49:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:09 vm05 ceph-mon[51512]: pgmap v800: 300 pgs: 300 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:09 vm05 ceph-mon[51512]: osdmap e527: 8 total, 8 up, 8 in 2026-03-10T13:49:10.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:49:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:49:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:49:10.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:09 vm09 ceph-mon[53367]: pgmap v800: 300 pgs: 300 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:10.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:10.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:09 vm09 ceph-mon[53367]: osdmap e527: 8 total, 8 up, 8 in 2026-03-10T13:49:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91276-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:49:11.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:11 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:11.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:11.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91276-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:49:11.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:11.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:11.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91276-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:49:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:12 vm05 ceph-mon[58955]: pgmap v803: 236 pgs: 236 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:12 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91276-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:49:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:12 vm05 ceph-mon[58955]: osdmap e528: 8 total, 8 up, 8 in 2026-03-10T13:49:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:12 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91276-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:12.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:12 vm05 ceph-mon[51512]: pgmap v803: 236 pgs: 236 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:12.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:12 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91276-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:49:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:12 vm05 ceph-mon[51512]: osdmap e528: 8 total, 8 up, 8 in 2026-03-10T13:49:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:12 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91276-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:12.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:12 vm09 ceph-mon[53367]: pgmap v803: 236 pgs: 236 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:12.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:12 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91276-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:49:12.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:12 vm09 ceph-mon[53367]: osdmap e528: 8 total, 8 up, 8 in 2026-03-10T13:49:12.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:12 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91276-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:13.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:13 vm05 ceph-mon[58955]: osdmap e529: 8 total, 8 up, 8 in 2026-03-10T13:49:13.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:13 vm05 ceph-mon[58955]: pgmap v806: 236 pgs: 236 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:13.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:13 vm05 ceph-mon[51512]: osdmap e529: 8 total, 8 up, 8 in 2026-03-10T13:49:13.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:13 vm05 ceph-mon[51512]: pgmap v806: 236 pgs: 236 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:13.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:13 vm09 ceph-mon[53367]: osdmap e529: 8 total, 8 up, 8 in 2026-03-10T13:49:13.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:13 vm09 ceph-mon[53367]: pgmap v806: 236 pgs: 236 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91276-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91276-109"}]': finished 2026-03-10T13:49:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:14 vm05 ceph-mon[58955]: osdmap e530: 8 total, 8 up, 8 in 2026-03-10T13:49:14.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91276-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91276-109"}]': finished 2026-03-10T13:49:14.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:14 vm05 ceph-mon[51512]: osdmap e530: 8 total, 8 up, 8 in 2026-03-10T13:49:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:14 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91276-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91276-109"}]': finished 2026-03-10T13:49:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:14 vm09 ceph-mon[53367]: osdmap e530: 8 total, 8 up, 8 in 2026-03-10T13:49:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:15 vm05 ceph-mon[58955]: osdmap e531: 8 total, 8 up, 8 in 2026-03-10T13:49:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:15 vm05 ceph-mon[58955]: pgmap v809: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:15.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:15 vm05 ceph-mon[51512]: osdmap e531: 8 total, 8 up, 8 in 2026-03-10T13:49:15.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:15.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:15 vm05 ceph-mon[51512]: pgmap v809: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:15 vm09 ceph-mon[53367]: osdmap e531: 8 total, 8 up, 8 in 2026-03-10T13:49:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:15.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:15 vm09 ceph-mon[53367]: pgmap v809: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:16 vm09 ceph-mon[53367]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:16.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:16.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:16 vm09 ceph-mon[53367]: osdmap e532: 8 total, 8 up, 8 in 2026-03-10T13:49:16.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:16 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-109", "tierpool": "test-rados-api-vm05-91276-109-cache"}]: dispatch 2026-03-10T13:49:16.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:16 vm05 ceph-mon[58955]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:16.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:16.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:16 vm05 ceph-mon[58955]: osdmap e532: 8 total, 8 up, 8 in 2026-03-10T13:49:16.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-109", "tierpool": "test-rados-api-vm05-91276-109-cache"}]: dispatch 2026-03-10T13:49:16.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:16 vm05 ceph-mon[51512]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:16.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:16.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:16 vm05 ceph-mon[51512]: osdmap e532: 8 total, 8 up, 8 in 2026-03-10T13:49:16.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-109", "tierpool": "test-rados-api-vm05-91276-109-cache"}]: dispatch 2026-03-10T13:49:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-109", "tierpool": "test-rados-api-vm05-91276-109-cache"}]': finished 2026-03-10T13:49:17.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:17 vm09 ceph-mon[53367]: osdmap e533: 8 total, 8 up, 8 in 2026-03-10T13:49:17.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-109", "overlaypool": "test-rados-api-vm05-91276-109-cache"}]: dispatch 2026-03-10T13:49:17.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:17 vm09 ceph-mon[53367]: pgmap v812: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:17.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:17 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-109", "overlaypool": "test-rados-api-vm05-91276-109-cache"}]': finished 2026-03-10T13:49:17.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:17 vm09 ceph-mon[53367]: osdmap e534: 8 total, 8 up, 8 in 2026-03-10T13:49:17.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-109-cache", "mode": "writeback"}]: dispatch 2026-03-10T13:49:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-109", "tierpool": "test-rados-api-vm05-91276-109-cache"}]': finished 2026-03-10T13:49:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:17 vm05 ceph-mon[58955]: osdmap e533: 8 total, 8 up, 8 in 2026-03-10T13:49:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-109", "overlaypool": "test-rados-api-vm05-91276-109-cache"}]: dispatch 2026-03-10T13:49:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:17 vm05 ceph-mon[58955]: pgmap v812: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-109", "overlaypool": "test-rados-api-vm05-91276-109-cache"}]': finished 2026-03-10T13:49:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:17 vm05 ceph-mon[58955]: osdmap e534: 8 total, 8 up, 8 in 2026-03-10T13:49:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-109-cache", "mode": "writeback"}]: dispatch 2026-03-10T13:49:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-109", "tierpool": "test-rados-api-vm05-91276-109-cache"}]': finished 2026-03-10T13:49:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:17 vm05 ceph-mon[51512]: osdmap e533: 8 total, 8 up, 8 in 2026-03-10T13:49:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-109", "overlaypool": "test-rados-api-vm05-91276-109-cache"}]: dispatch 2026-03-10T13:49:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:17 vm05 ceph-mon[51512]: pgmap v812: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:17 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-109", "overlaypool": "test-rados-api-vm05-91276-109-cache"}]': finished 2026-03-10T13:49:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:17 vm05 ceph-mon[51512]: osdmap e534: 8 total, 8 up, 8 in 2026-03-10T13:49:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:17 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-109-cache", "mode": "writeback"}]: dispatch 2026-03-10T13:49:18.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:18 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:49:18.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:18 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-109-cache", "mode": "writeback"}]': finished 2026-03-10T13:49:18.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:18 vm09 ceph-mon[53367]: osdmap e535: 8 total, 8 up, 8 in 2026-03-10T13:49:18.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:18 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:49:18.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:18 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:49:18.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:18 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-109-cache", "mode": "writeback"}]': finished 2026-03-10T13:49:18.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:18 vm05 ceph-mon[58955]: osdmap e535: 8 total, 8 up, 8 in 2026-03-10T13:49:18.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:18 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:49:18.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:18 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:49:18.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:18 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-109-cache", "mode": "writeback"}]': finished 2026-03-10T13:49:18.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:18 vm05 ceph-mon[51512]: osdmap e535: 8 total, 8 up, 8 in 2026-03-10T13:49:18.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:18 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:49:19.105 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:49:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:49:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:19 vm09 ceph-mon[53367]: pgmap v815: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:19.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-10T13:49:19.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:19 vm09 ceph-mon[53367]: osdmap e536: 8 total, 8 up, 8 in 2026-03-10T13:49:19.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:49:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:19 vm05 ceph-mon[58955]: pgmap v815: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-10T13:49:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:19 vm05 ceph-mon[58955]: osdmap e536: 8 total, 8 up, 8 in 2026-03-10T13:49:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:49:19.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:19 vm05 ceph-mon[51512]: pgmap v815: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 1015 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:19.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:19.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:19 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-10T13:49:19.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:19 vm05 ceph-mon[51512]: osdmap e536: 8 total, 8 up, 8 in 2026-03-10T13:49:19.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:49:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:49:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:49:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:49:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:49:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:21 vm09 ceph-mon[53367]: osdmap e537: 8 total, 8 up, 8 in 2026-03-10T13:49:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T13:49:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:21 vm09 ceph-mon[53367]: pgmap v818: 276 pgs: 276 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:21.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:49:21.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:21 vm05 ceph-mon[58955]: osdmap e537: 8 total, 8 up, 8 in 2026-03-10T13:49:21.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T13:49:21.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:21 vm05 ceph-mon[58955]: pgmap v818: 276 pgs: 276 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:21.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:49:21.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:21 vm05 ceph-mon[51512]: osdmap e537: 8 total, 8 up, 8 in 2026-03-10T13:49:21.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:21 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T13:49:21.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:21 vm05 ceph-mon[51512]: pgmap v818: 276 pgs: 276 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:22 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:49:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T13:49:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:22 vm09 ceph-mon[53367]: osdmap e538: 8 total, 8 up, 8 in 2026-03-10T13:49:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T13:49:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:22 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-10T13:49:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:22 vm09 ceph-mon[53367]: osdmap e539: 8 total, 8 up, 8 in 2026-03-10T13:49:22.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:22 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:49:22.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T13:49:22.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:22 vm05 ceph-mon[58955]: osdmap e538: 8 total, 8 up, 8 in 2026-03-10T13:49:22.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T13:49:22.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:22 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:22.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:22 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-10T13:49:22.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:22 vm05 ceph-mon[58955]: osdmap e539: 8 total, 8 up, 8 in 2026-03-10T13:49:22.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:22 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:49:22.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T13:49:22.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:22 vm05 ceph-mon[51512]: osdmap e538: 8 total, 8 up, 8 in 2026-03-10T13:49:22.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T13:49:22.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:22 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:22.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-10T13:49:22.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:22 vm05 ceph-mon[51512]: osdmap e539: 8 total, 8 up, 8 in 2026-03-10T13:49:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:23 vm09 ceph-mon[53367]: pgmap v821: 276 pgs: 276 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:23 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:23 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:23 vm05 ceph-mon[58955]: pgmap v821: 276 pgs: 276 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:23 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:23 vm05 ceph-mon[51512]: pgmap v821: 276 pgs: 276 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:23 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:24.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-109"}]': finished 2026-03-10T13:49:24.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:24 vm09 ceph-mon[53367]: osdmap e540: 8 total, 8 up, 8 in 2026-03-10T13:49:24.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-109", "tierpool": "test-rados-api-vm05-91276-109-cache"}]: dispatch 2026-03-10T13:49:24.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-109"}]': finished 2026-03-10T13:49:24.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:24 vm05 ceph-mon[58955]: osdmap e540: 8 total, 8 up, 8 in 2026-03-10T13:49:24.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-109", "tierpool": "test-rados-api-vm05-91276-109-cache"}]: dispatch 2026-03-10T13:49:24.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-109"}]': finished 2026-03-10T13:49:24.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:24 vm05 ceph-mon[51512]: osdmap e540: 8 total, 8 up, 8 in 2026-03-10T13:49:24.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:24 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-109", "tierpool": "test-rados-api-vm05-91276-109-cache"}]: dispatch 2026-03-10T13:49:25.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:25 vm09 ceph-mon[53367]: pgmap v823: 276 pgs: 276 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 252 B/s wr, 2 op/s 2026-03-10T13:49:25.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-109", "tierpool": "test-rados-api-vm05-91276-109-cache"}]': finished 2026-03-10T13:49:25.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:25 vm09 ceph-mon[53367]: osdmap e541: 8 total, 8 up, 8 in 2026-03-10T13:49:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:25 vm05 ceph-mon[58955]: pgmap v823: 276 pgs: 276 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 252 B/s wr, 2 op/s 2026-03-10T13:49:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-109", "tierpool": "test-rados-api-vm05-91276-109-cache"}]': finished 2026-03-10T13:49:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:25 vm05 ceph-mon[58955]: osdmap e541: 8 total, 8 up, 8 in 2026-03-10T13:49:25.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:25 vm05 ceph-mon[51512]: pgmap v823: 276 pgs: 276 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 252 B/s wr, 2 op/s 2026-03-10T13:49:25.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-109", "tierpool": "test-rados-api-vm05-91276-109-cache"}]': finished 2026-03-10T13:49:25.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:25 vm05 ceph-mon[51512]: osdmap e541: 8 total, 8 up, 8 in 2026-03-10T13:49:26.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:26 vm09 ceph-mon[53367]: osdmap e542: 8 total, 8 up, 8 in 2026-03-10T13:49:26.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:26 vm05 ceph-mon[58955]: osdmap e542: 8 total, 8 up, 8 in 2026-03-10T13:49:26.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:26 vm05 ceph-mon[51512]: osdmap e542: 8 total, 8 up, 8 in 2026-03-10T13:49:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:27 vm05 ceph-mon[58955]: pgmap v826: 244 pgs: 244 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 252 B/s wr, 1 op/s 2026-03-10T13:49:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:27 vm05 ceph-mon[58955]: osdmap e543: 8 total, 8 up, 8 in 2026-03-10T13:49:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:27 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:27.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:27 vm05 ceph-mon[51512]: pgmap v826: 244 pgs: 244 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 252 B/s wr, 1 op/s 2026-03-10T13:49:27.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:27 vm05 ceph-mon[51512]: osdmap e543: 8 total, 8 up, 8 in 2026-03-10T13:49:27.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:27 vm09 ceph-mon[53367]: pgmap v826: 244 pgs: 244 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 252 B/s wr, 1 op/s 2026-03-10T13:49:27.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:27 vm09 ceph-mon[53367]: osdmap e543: 8 total, 8 up, 8 in 2026-03-10T13:49:27.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:28.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-109"}]': finished 2026-03-10T13:49:28.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:28 vm05 ceph-mon[58955]: osdmap e544: 8 total, 8 up, 8 in 2026-03-10T13:49:28.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:28.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-109"}]': finished 2026-03-10T13:49:28.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:28 vm05 ceph-mon[58955]: osdmap e545: 8 total, 8 up, 8 in 2026-03-10T13:49:28.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-109"}]': finished 2026-03-10T13:49:28.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:28 vm05 ceph-mon[51512]: osdmap e544: 8 total, 8 up, 8 in 2026-03-10T13:49:28.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:28.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:28 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-109"}]': finished 2026-03-10T13:49:28.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:28 vm05 ceph-mon[51512]: osdmap e545: 8 total, 8 up, 8 in 2026-03-10T13:49:28.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-109"}]': finished 2026-03-10T13:49:28.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:28 vm09 ceph-mon[53367]: osdmap e544: 8 total, 8 up, 8 in 2026-03-10T13:49:28.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-109"}]: dispatch 2026-03-10T13:49:28.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/893265441' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-109"}]': finished 2026-03-10T13:49:28.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:28 vm09 ceph-mon[53367]: osdmap e545: 8 total, 8 up, 8 in 2026-03-10T13:49:29.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:49:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:49:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:29 vm05 ceph-mon[58955]: pgmap v829: 236 pgs: 236 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:29 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:29 vm05 ceph-mon[51512]: pgmap v829: 236 pgs: 236 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:29 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:29 vm09 ceph-mon[53367]: pgmap v829: 236 pgs: 236 active+clean; 455 KiB data, 1016 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:29 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:30.274 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:49:29 vm05 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:49:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:49:30.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:30 vm05 ceph-mon[58955]: osdmap e546: 8 total, 8 up, 8 in 2026-03-10T13:49:30.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:49:30.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:30 vm05 ceph-mon[58955]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:49:30.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:30 vm05 ceph-mon[51512]: osdmap e546: 8 total, 8 up, 8 in 2026-03-10T13:49:30.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:49:30.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:30 vm05 ceph-mon[51512]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:49:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:30 vm09 ceph-mon[53367]: osdmap e546: 8 total, 8 up, 8 in 2026-03-10T13:49:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:49:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:30 vm09 ceph-mon[53367]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:49:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:31 vm05 ceph-mon[58955]: pgmap v832: 228 pgs: 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T13:49:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:31 vm05 ceph-mon[58955]: from='client.50530 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-91276-104"}]': finished 2026-03-10T13:49:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:31 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:49:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:31 vm05 ceph-mon[58955]: osdmap e547: 8 total, 8 up, 8 in 2026-03-10T13:49:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:31 vm05 ceph-mon[58955]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:49:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:31 vm05 ceph-mon[51512]: pgmap v832: 228 pgs: 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T13:49:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:31 vm05 ceph-mon[51512]: from='client.50530 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-91276-104"}]': finished 2026-03-10T13:49:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:49:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:31 vm05 ceph-mon[51512]: osdmap e547: 8 total, 8 up, 8 in 2026-03-10T13:49:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:31 vm05 ceph-mon[51512]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:49:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:31 vm09 ceph-mon[53367]: pgmap v832: 228 pgs: 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T13:49:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:31 vm09 ceph-mon[53367]: from='client.50530 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm05-91276-104"}]': finished 2026-03-10T13:49:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4254525135' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:49:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:31 vm09 ceph-mon[53367]: osdmap e547: 8 total, 8 up, 8 in 2026-03-10T13:49:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:31 vm09 ceph-mon[53367]: from='client.50530 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-91276-104"}]: dispatch 2026-03-10T13:49:32.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[58955]: from='client.50530 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-91276-104"}]': finished 2026-03-10T13:49:32.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[58955]: osdmap e548: 8 total, 8 up, 8 in 2026-03-10T13:49:32.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:32.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:32.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:32.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:32.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91276-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:49:32.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91276-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:49:32.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[51512]: from='client.50530 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-91276-104"}]': finished 2026-03-10T13:49:32.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[51512]: osdmap e548: 8 total, 8 up, 8 in 2026-03-10T13:49:32.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:32.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:32.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:32.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:32.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91276-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:49:32.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:32 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91276-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:49:32.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:32 vm09 ceph-mon[53367]: from='client.50530 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm05-91276-104"}]': finished 2026-03-10T13:49:32.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:32 vm09 ceph-mon[53367]: osdmap e548: 8 total, 8 up, 8 in 2026-03-10T13:49:32.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:32.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:32 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:32.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:32.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:32 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:32.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:32 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91276-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:49:32.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:32 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91276-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T13:49:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:33 vm05 ceph-mon[58955]: pgmap v835: 228 pgs: 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T13:49:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:33 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91276-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:49:33.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:33 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91276-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:33.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:33 vm05 ceph-mon[58955]: osdmap e549: 8 total, 8 up, 8 in 2026-03-10T13:49:33.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:33 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91276-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:33 vm05 ceph-mon[51512]: pgmap v835: 228 pgs: 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T13:49:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:33 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91276-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:49:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91276-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:33 vm05 ceph-mon[51512]: osdmap e549: 8 total, 8 up, 8 in 2026-03-10T13:49:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:33 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91276-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:33.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:33 vm09 ceph-mon[53367]: pgmap v835: 228 pgs: 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T13:49:33.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:33 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm05-91276-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T13:49:33.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:33 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91276-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:33.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:33 vm09 ceph-mon[53367]: osdmap e549: 8 total, 8 up, 8 in 2026-03-10T13:49:33.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:33 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91276-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:34 vm05 ceph-mon[58955]: osdmap e550: 8 total, 8 up, 8 in 2026-03-10T13:49:34.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:34 vm05 ceph-mon[51512]: osdmap e550: 8 total, 8 up, 8 in 2026-03-10T13:49:34.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:34 vm09 ceph-mon[53367]: osdmap e550: 8 total, 8 up, 8 in 2026-03-10T13:49:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:35 vm05 ceph-mon[58955]: pgmap v838: 228 pgs: 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:35 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91276-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:49:35.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:35 vm05 ceph-mon[58955]: osdmap e551: 8 total, 8 up, 8 in 2026-03-10T13:49:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:35 vm05 ceph-mon[51512]: pgmap v838: 228 pgs: 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:35 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91276-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:49:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:35 vm05 ceph-mon[51512]: osdmap e551: 8 total, 8 up, 8 in 2026-03-10T13:49:35.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:35 vm09 ceph-mon[53367]: pgmap v838: 228 pgs: 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:35.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:35 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm05-91276-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:49:35.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:35 vm09 ceph-mon[53367]: osdmap e551: 8 total, 8 up, 8 in 2026-03-10T13:49:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:36 vm09 ceph-mon[53367]: osdmap e552: 8 total, 8 up, 8 in 2026-03-10T13:49:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:36 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:36 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:36 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:36 vm05 ceph-mon[58955]: osdmap e552: 8 total, 8 up, 8 in 2026-03-10T13:49:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:36 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:36 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:36.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:36 vm05 ceph-mon[51512]: osdmap e552: 8 total, 8 up, 8 in 2026-03-10T13:49:36.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:36.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:36 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:36.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:36 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:37.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:37 vm09 ceph-mon[53367]: pgmap v841: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:37.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:37 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:37.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:37 vm09 ceph-mon[53367]: osdmap e553: 8 total, 8 up, 8 in 2026-03-10T13:49:37.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:37 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:37.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:37 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:37 vm05 ceph-mon[58955]: pgmap v841: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:37 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:37 vm05 ceph-mon[58955]: osdmap e553: 8 total, 8 up, 8 in 2026-03-10T13:49:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:37 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:37.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:37 vm05 ceph-mon[51512]: pgmap v841: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:37.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:37 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:37.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:37 vm05 ceph-mon[51512]: osdmap e553: 8 total, 8 up, 8 in 2026-03-10T13:49:37.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:37 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:37.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:37 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:38 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:49:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:38 vm09 ceph-mon[53367]: osdmap e554: 8 total, 8 up, 8 in 2026-03-10T13:49:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:38 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-112"}]: dispatch 2026-03-10T13:49:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:38 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-112"}]: dispatch 2026-03-10T13:49:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:38.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:38 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:49:38.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:38 vm05 ceph-mon[58955]: osdmap e554: 8 total, 8 up, 8 in 2026-03-10T13:49:38.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:38 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-112"}]: dispatch 2026-03-10T13:49:38.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:38 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-112"}]: dispatch 2026-03-10T13:49:38.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:38.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:38 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:49:38.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:38 vm05 ceph-mon[51512]: osdmap e554: 8 total, 8 up, 8 in 2026-03-10T13:49:38.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-112"}]: dispatch 2026-03-10T13:49:38.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:38 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-112"}]: dispatch 2026-03-10T13:49:38.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:39.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:49:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:49:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:39 vm09 ceph-mon[53367]: pgmap v844: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:39 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-112"}]': finished 2026-03-10T13:49:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:39 vm09 ceph-mon[53367]: osdmap e555: 8 total, 8 up, 8 in 2026-03-10T13:49:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:39 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:39 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:39 vm05 ceph-mon[58955]: pgmap v844: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:39.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:39 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-112"}]': finished 2026-03-10T13:49:39.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:39 vm05 ceph-mon[58955]: osdmap e555: 8 total, 8 up, 8 in 2026-03-10T13:49:39.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:39 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:39.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:39 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:39.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:39 vm05 ceph-mon[51512]: pgmap v844: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 1017 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:39 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-112"}]': finished 2026-03-10T13:49:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:39 vm05 ceph-mon[51512]: osdmap e555: 8 total, 8 up, 8 in 2026-03-10T13:49:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:39 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:39 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:49:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:49:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:49:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:40 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:49:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:40 vm05 ceph-mon[58955]: osdmap e556: 8 total, 8 up, 8 in 2026-03-10T13:49:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:40 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112"}]: dispatch 2026-03-10T13:49:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:40 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112"}]: dispatch 2026-03-10T13:49:40.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:40 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:49:40.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:40 vm05 ceph-mon[51512]: osdmap e556: 8 total, 8 up, 8 in 2026-03-10T13:49:40.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:40 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112"}]: dispatch 2026-03-10T13:49:40.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:40 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112"}]: dispatch 2026-03-10T13:49:40.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:40 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:49:40.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:40 vm09 ceph-mon[53367]: osdmap e556: 8 total, 8 up, 8 in 2026-03-10T13:49:40.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:40 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112"}]: dispatch 2026-03-10T13:49:40.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:40 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112"}]: dispatch 2026-03-10T13:49:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:41 vm05 ceph-mon[51512]: pgmap v847: 268 pgs: 268 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T13:49:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:41 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112"}]': finished 2026-03-10T13:49:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:41 vm05 ceph-mon[51512]: osdmap e557: 8 total, 8 up, 8 in 2026-03-10T13:49:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:41 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:49:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:41 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:41 vm05 ceph-mon[58955]: pgmap v847: 268 pgs: 268 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T13:49:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:41 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112"}]': finished 2026-03-10T13:49:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:41 vm05 ceph-mon[58955]: osdmap e557: 8 total, 8 up, 8 in 2026-03-10T13:49:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:41 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:49:41.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:41 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:41.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:41 vm09 ceph-mon[53367]: pgmap v847: 268 pgs: 268 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T13:49:41.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:41 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-112"}]': finished 2026-03-10T13:49:41.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:41 vm09 ceph-mon[53367]: osdmap e557: 8 total, 8 up, 8 in 2026-03-10T13:49:41.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:41 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:49:41.917 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:41 vm09 ceph-mon[53367]: Health check 
update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:49:42.502 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:42 vm09 ceph-mon[53367]: osdmap e558: 8 total, 8 up, 8 in 2026-03-10T13:49:42.502 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:42 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:42.502 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:42 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:42 vm05 ceph-mon[58955]: osdmap e558: 8 total, 8 up, 8 in 2026-03-10T13:49:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:42 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:42 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:42 vm05 ceph-mon[51512]: osdmap e558: 8 total, 8 up, 8 in 2026-03-10T13:49:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:42 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:42 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[58955]: pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[58955]: osdmap e559: 8 total, 8 up, 8 in 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[51512]: pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[51512]: osdmap e559: 8 total, 8 up, 8 in 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:43 vm05 
ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:49:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:43 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:43.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:43 vm09 ceph-mon[53367]: pgmap v850: 236 pgs: 236 active+clean; 455 KiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:49:43.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:43 vm09 ceph-mon[53367]: osdmap e559: 8 total, 8 up, 8 in 2026-03-10T13:49:43.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:43 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:43.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:43 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-114","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:43.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:43 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:43.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:43 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:43.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:43 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:49:43.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:43 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:49:43.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:43 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:49:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:44 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:44 vm05 ceph-mon[58955]: osdmap e560: 8 total, 8 up, 8 in 2026-03-10T13:49:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:44 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:44 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:44.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:44 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:44.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:44 vm05 ceph-mon[51512]: osdmap e560: 8 total, 8 up, 8 in 2026-03-10T13:49:44.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:44 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:44 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:44.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:44 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-114","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:44.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:44 vm09 ceph-mon[53367]: osdmap e560: 8 total, 8 up, 8 in 2026-03-10T13:49:44.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:44.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:44 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:45.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:45 vm09 ceph-mon[53367]: pgmap v853: 268 pgs: 17 creating+peering, 251 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:49:45.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:45 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:49:45.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:45 vm09 ceph-mon[53367]: osdmap e561: 8 total, 8 up, 8 in 2026-03-10T13:49:45.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:45 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-114"}]: dispatch 2026-03-10T13:49:45.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:45 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-114"}]: dispatch 2026-03-10T13:49:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:45 vm05 ceph-mon[58955]: pgmap v853: 268 pgs: 17 creating+peering, 251 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:49:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:45 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:49:45.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:45 vm05 ceph-mon[58955]: osdmap e561: 8 total, 8 up, 8 in 2026-03-10T13:49:45.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-114"}]: dispatch 2026-03-10T13:49:45.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:45 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-114"}]: dispatch 2026-03-10T13:49:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:45 vm05 ceph-mon[51512]: pgmap v853: 268 pgs: 17 creating+peering, 251 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:49:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:45 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:49:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:45 vm05 ceph-mon[51512]: osdmap e561: 8 total, 8 up, 8 in 2026-03-10T13:49:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:45 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-114"}]: dispatch 2026-03-10T13:49:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:45 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-114"}]: dispatch 2026-03-10T13:49:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:46 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-114"}]': finished 2026-03-10T13:49:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:46 vm05 ceph-mon[58955]: osdmap e562: 8 total, 8 up, 8 in 2026-03-10T13:49:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-114", "mode": "writeback"}]: dispatch 2026-03-10T13:49:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:46 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-114", "mode": "writeback"}]: dispatch 2026-03-10T13:49:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:46 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-114"}]': finished 2026-03-10T13:49:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:46 vm05 ceph-mon[51512]: osdmap e562: 8 total, 8 up, 8 in 2026-03-10T13:49:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-114", "mode": "writeback"}]: dispatch 2026-03-10T13:49:46.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:46 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-114", "mode": "writeback"}]: dispatch 2026-03-10T13:49:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:46 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-114"}]': finished 2026-03-10T13:49:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:46 vm09 ceph-mon[53367]: osdmap e562: 8 total, 8 up, 8 in 2026-03-10T13:49:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:46 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-114", "mode": "writeback"}]: dispatch 2026-03-10T13:49:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:46 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-114", "mode": "writeback"}]: dispatch 2026-03-10T13:49:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:47 vm05 ceph-mon[58955]: pgmap v856: 268 pgs: 17 creating+peering, 251 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:49:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:47 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:49:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:47 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-114", "mode": "writeback"}]': finished 2026-03-10T13:49:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:47 vm05 ceph-mon[58955]: osdmap e563: 8 total, 8 up, 8 in 2026-03-10T13:49:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:47 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:47 vm05 ceph-mon[51512]: pgmap v856: 268 pgs: 17 creating+peering, 251 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:49:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:47 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:49:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:47 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-114", "mode": "writeback"}]': finished 2026-03-10T13:49:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:47 vm05 ceph-mon[51512]: osdmap e563: 8 total, 8 up, 8 in 2026-03-10T13:49:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:47 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:47 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:47 vm09 ceph-mon[53367]: pgmap v856: 268 pgs: 17 creating+peering, 251 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:49:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:47 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:49:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:47 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-114", "mode": "writeback"}]': finished 2026-03-10T13:49:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:47 vm09 ceph-mon[53367]: osdmap e563: 8 total, 8 up, 8 in 2026-03-10T13:49:47.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:47 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:47.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:47 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:49:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:48 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:49:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:48 vm05 ceph-mon[58955]: osdmap e564: 8 total, 8 up, 8 in 2026-03-10T13:49:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114"}]: dispatch 2026-03-10T13:49:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:48 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114"}]: dispatch 2026-03-10T13:49:48.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:48 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:49:48.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:48 vm05 ceph-mon[51512]: osdmap e564: 8 total, 8 up, 8 in 2026-03-10T13:49:48.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:48 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114"}]: dispatch 2026-03-10T13:49:48.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:48 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114"}]: dispatch 2026-03-10T13:49:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:48 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:49:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:48 vm09 ceph-mon[53367]: osdmap e564: 8 total, 8 up, 8 in 2026-03-10T13:49:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114"}]: dispatch 2026-03-10T13:49:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:48 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114"}]: dispatch 2026-03-10T13:49:48.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:49:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:49:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:49 vm09 ceph-mon[53367]: pgmap v859: 268 pgs: 17 creating+peering, 251 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:49 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:49:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:49 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114"}]': finished 2026-03-10T13:49:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:49 vm09 ceph-mon[53367]: osdmap e565: 8 total, 8 up, 8 in 2026-03-10T13:49:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:49.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:49 vm05 ceph-mon[58955]: pgmap v859: 268 pgs: 17 creating+peering, 251 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:49.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:49 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:49:49.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:49 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114"}]': finished 2026-03-10T13:49:49.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:49 vm05 ceph-mon[58955]: osdmap e565: 8 total, 8 up, 8 in 2026-03-10T13:49:49.980 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:49.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:49 vm05 ceph-mon[51512]: pgmap v859: 268 pgs: 17 creating+peering, 251 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail 2026-03-10T13:49:49.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:49 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:49:49.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:49 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-114"}]': finished 2026-03-10T13:49:49.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:49 vm05 ceph-mon[51512]: osdmap e565: 8 total, 8 up, 8 in 2026-03-10T13:49:49.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:49:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:49:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:49:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:49:50.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:50 vm09 ceph-mon[53367]: osdmap e566: 8 total, 8 up, 8 in 2026-03-10T13:49:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:50 vm05 ceph-mon[58955]: osdmap e566: 8 total, 8 up, 8 in 2026-03-10T13:49:51.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:50 vm05 ceph-mon[51512]: osdmap e566: 8 total, 8 up, 8 in 2026-03-10T13:49:51.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:51 vm09 ceph-mon[53367]: pgmap v862: 236 pgs: 236 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:51.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:51 vm09 ceph-mon[53367]: osdmap e567: 8 total, 8 up, 8 in 2026-03-10T13:49:51.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:51.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:51 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:51 vm05 ceph-mon[58955]: pgmap v862: 236 pgs: 236 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:51 vm05 ceph-mon[58955]: osdmap e567: 8 total, 8 up, 8 in 2026-03-10T13:49:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:51 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:52.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:51 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:52.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:51 vm05 ceph-mon[51512]: pgmap v862: 236 pgs: 236 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:52.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:51 vm05 ceph-mon[51512]: osdmap e567: 8 total, 8 up, 8 in 2026-03-10T13:49:52.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:51 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:52.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:51 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-116","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:49:52.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:52 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:52.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:52 vm09 ceph-mon[53367]: osdmap e568: 8 total, 8 up, 8 in 2026-03-10T13:49:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:52 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:52 vm05 ceph-mon[58955]: osdmap e568: 8 total, 8 up, 8 in 2026-03-10T13:49:53.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:52 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-116","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:49:53.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:52 vm05 ceph-mon[51512]: osdmap e568: 8 total, 8 up, 8 in 2026-03-10T13:49:53.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:53 vm09 ceph-mon[53367]: pgmap v865: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:53.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:53 vm09 ceph-mon[53367]: osdmap e569: 8 total, 8 up, 8 in 2026-03-10T13:49:53.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:53 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:53.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:53 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:53.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:53 vm05 ceph-mon[58955]: pgmap v865: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:53 vm05 ceph-mon[58955]: osdmap e569: 8 total, 8 up, 8 in 2026-03-10T13:49:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:53 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:53 vm05 ceph-mon[51512]: pgmap v865: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:49:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:53 vm05 ceph-mon[51512]: osdmap e569: 8 total, 8 up, 8 in 2026-03-10T13:49:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:53 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:53 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:49:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:55.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:54 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:49:55.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:54 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-116"}]: dispatch 2026-03-10T13:49:55.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:54 vm05 ceph-mon[58955]: osdmap e570: 8 total, 8 up, 8 in 2026-03-10T13:49:55.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:54 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-116"}]: dispatch 2026-03-10T13:49:55.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:54 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:49:55.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:54 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-116"}]: dispatch 2026-03-10T13:49:55.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:54 vm05 ceph-mon[51512]: osdmap e570: 8 total, 8 up, 8 in 2026-03-10T13:49:55.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:54 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-116"}]: dispatch 2026-03-10T13:49:55.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:54 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:49:55.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:54 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-116"}]: dispatch 2026-03-10T13:49:55.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:54 vm09 ceph-mon[53367]: osdmap e570: 8 total, 8 up, 8 in 2026-03-10T13:49:55.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:54 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-116"}]: dispatch 2026-03-10T13:49:56.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:55 vm05 ceph-mon[58955]: pgmap v868: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:49:56.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:55 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-116"}]': finished 2026-03-10T13:49:56.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:55 vm05 ceph-mon[58955]: osdmap e571: 8 total, 8 up, 8 in 2026-03-10T13:49:56.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-116", "mode": "writeback"}]: dispatch 2026-03-10T13:49:56.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:55 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-116", "mode": "writeback"}]: dispatch 2026-03-10T13:49:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:55 vm05 ceph-mon[51512]: pgmap v868: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:49:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:55 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-116"}]': finished 2026-03-10T13:49:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:55 vm05 ceph-mon[51512]: osdmap e571: 8 total, 8 up, 8 in 2026-03-10T13:49:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:55 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-116", "mode": "writeback"}]: dispatch 2026-03-10T13:49:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:55 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-116", "mode": "writeback"}]: dispatch 2026-03-10T13:49:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:55 vm09 ceph-mon[53367]: pgmap v868: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:49:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:55 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-116"}]': finished 2026-03-10T13:49:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:55 vm09 ceph-mon[53367]: osdmap e571: 8 total, 8 up, 8 in 2026-03-10T13:49:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-116", "mode": "writeback"}]: dispatch 2026-03-10T13:49:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:55 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-116", "mode": "writeback"}]: dispatch 2026-03-10T13:49:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:56 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:49:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:56 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-116", "mode": "writeback"}]': finished 2026-03-10T13:49:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:56 vm05 ceph-mon[58955]: osdmap e572: 8 total, 8 up, 8 in 2026-03-10T13:49:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:56 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch 2026-03-10T13:49:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:56 vm05 ceph-mon[58955]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch 2026-03-10T13:49:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:56 vm05 ceph-mon[58955]: 318.5 scrub starts 2026-03-10T13:49:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:56 vm05 ceph-mon[58955]: 318.5 scrub ok 2026-03-10T13:49:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:56 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:49:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:56 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-116", "mode": "writeback"}]': finished 2026-03-10T13:49:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:56 vm05 ceph-mon[51512]: osdmap e572: 8 total, 8 up, 8 in 2026-03-10T13:49:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:56 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch 2026-03-10T13:49:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:56 vm05 ceph-mon[51512]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch 2026-03-10T13:49:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:56 vm05 ceph-mon[51512]: 318.5 scrub starts 2026-03-10T13:49:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:56 vm05 ceph-mon[51512]: 318.5 scrub ok 2026-03-10T13:49:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:56 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:49:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:56 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-116", "mode": "writeback"}]': finished 2026-03-10T13:49:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:56 vm09 ceph-mon[53367]: osdmap e572: 8 total, 8 up, 8 in 2026-03-10T13:49:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:56 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch 2026-03-10T13:49:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:56 vm09 ceph-mon[53367]: from='mon.? v1:192.168.123.105:0/3613634182' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.5"}]: dispatch 2026-03-10T13:49:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:56 vm09 ceph-mon[53367]: 318.5 scrub starts 2026-03-10T13:49:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:56 vm09 ceph-mon[53367]: 318.5 scrub ok 2026-03-10T13:49:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:57 vm05 ceph-mon[58955]: pgmap v871: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:49:58.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:57 vm05 ceph-mon[51512]: pgmap v871: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:49:58.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:57 vm09 ceph-mon[53367]: pgmap v871: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:49:59.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:49:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:50:00.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:59 vm05 ceph-mon[58955]: pgmap v872: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 929 B/s rd, 1.3 KiB/s wr, 2 op/s 2026-03-10T13:50:00.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:49:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:00.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:59 vm05 ceph-mon[51512]: pgmap v872: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 929 B/s rd, 1.3 KiB/s wr, 2 op/s 2026-03-10T13:50:00.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:49:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:00.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:49:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:49:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:50:00.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:59 vm09 ceph-mon[53367]: pgmap v872: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 929 B/s rd, 1.3 KiB/s wr, 2 op/s 2026-03-10T13:50:00.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:49:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[58955]: Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-10T13:50:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[58955]: [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-10T13:50:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[58955]: pool 'test-rados-api-vm05-91276-116' with cache_mode writeback needs hit_set_type to be set but it is not 2026-03-10T13:50:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[58955]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-10T13:50:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[58955]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T13:50:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[58955]: application not enabled on pool 'WatchNotifyvm05-92449-1' 2026-03-10T13:50:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[58955]: application not enabled on pool 'AssertExistsvm05-92484-1' 2026-03-10T13:50:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[58955]: application not enabled on pool 'test-rados-api-vm05-91276-111' 2026-03-10T13:50:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[58955]: use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
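The HEALTH_WARN detail above combines two advisory checks: CACHE_POOL_NO_HIT_SET (the writeback cache pool has no hit-set configured) and POOL_APP_NOT_ENABLED (the transient test pools carry no application tag). A minimal sketch of how an operator would clear them by hand, using the pool names reported by mon.c; the hit-set values are illustrative only, and the test itself issues the equivalent application-enable commands a few seconds later:

# give the writeback cache pool a hit-set so CACHE_POOL_NO_HIT_SET clears
ceph osd pool set test-rados-api-vm05-91276-116 hit_set_type bloom
ceph osd pool set test-rados-api-vm05-91276-116 hit_set_count 8      # illustrative value
ceph osd pool set test-rados-api-vm05-91276-116 hit_set_period 3600  # illustrative value

# tag each pool named in the warning so POOL_APP_NOT_ENABLED clears
for pool in ceph_test_rados_api_asio WatchNotifyvm05-92449-1 \
            AssertExistsvm05-92484-1 test-rados-api-vm05-91276-111; do
    ceph osd pool application enable "$pool" rados
done

In this run the warnings are expected noise from short-lived test pools rather than something to act on.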
2026-03-10T13:50:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[51512]: Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-10T13:50:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[51512]: [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-10T13:50:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[51512]: pool 'test-rados-api-vm05-91276-116' with cache_mode writeback needs hit_set_type to be set but it is not 2026-03-10T13:50:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[51512]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-10T13:50:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[51512]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T13:50:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[51512]: application not enabled on pool 'WatchNotifyvm05-92449-1' 2026-03-10T13:50:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[51512]: application not enabled on pool 'AssertExistsvm05-92484-1' 2026-03-10T13:50:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[51512]: application not enabled on pool 'test-rados-api-vm05-91276-111' 2026-03-10T13:50:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:00 vm05 ceph-mon[51512]: use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T13:50:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:00 vm09 ceph-mon[53367]: Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-10T13:50:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:00 vm09 ceph-mon[53367]: [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-10T13:50:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:00 vm09 ceph-mon[53367]: pool 'test-rados-api-vm05-91276-116' with cache_mode writeback needs hit_set_type to be set but it is not 2026-03-10T13:50:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:00 vm09 ceph-mon[53367]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-10T13:50:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:00 vm09 ceph-mon[53367]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T13:50:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:00 vm09 ceph-mon[53367]: application not enabled on pool 'WatchNotifyvm05-92449-1' 2026-03-10T13:50:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:00 vm09 ceph-mon[53367]: application not enabled on pool 'AssertExistsvm05-92484-1' 2026-03-10T13:50:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:00 vm09 ceph-mon[53367]: application not enabled on pool 'test-rados-api-vm05-91276-111' 2026-03-10T13:50:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:00 vm09 ceph-mon[53367]: use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
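For orientation, the paired client.admin entries that dominate this stretch of the log (each command audited as "dispatch" and then "finished") are the mon's record of the cache-tier attach/detach cycle that the rados API test keeps driving against its base pool. Reconstructed as a plain-CLI sketch, assuming an admin keyring on the test node; the pool names come from the surrounding entries (later iterations use the ...-118 and ...-120 pools), and the shell variables are only for readability:

base=test-rados-api-vm05-91276-111
cache=test-rados-api-vm05-91276-116

# attach: add the cache tier, route I/O through it, switch it to writeback
ceph osd tier add "$base" "$cache" --force-nonempty
ceph osd tier set-overlay "$base" "$cache"
ceph osd tier cache-mode "$cache" writeback

# detach: drop the overlay, then unlink the tier
ceph osd tier remove-overlay "$base"
ceph osd tier remove "$base" "$cache"

Each switch to writeback briefly raises CACHE_POOL_NO_HIT_SET and each tier removal clears it again, which matches the health-check chatter interleaved above and below.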
2026-03-10T13:50:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:01 vm05 ceph-mon[58955]: pgmap v873: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 316 B/s wr, 1 op/s 2026-03-10T13:50:02.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:01 vm05 ceph-mon[51512]: pgmap v873: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 316 B/s wr, 1 op/s 2026-03-10T13:50:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:01 vm09 ceph-mon[53367]: pgmap v873: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 316 B/s wr, 1 op/s 2026-03-10T13:50:04.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:03 vm05 ceph-mon[58955]: pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 1 op/s 2026-03-10T13:50:04.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:03 vm05 ceph-mon[51512]: pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 1 op/s 2026-03-10T13:50:04.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:03 vm09 ceph-mon[53367]: pgmap v874: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 1 op/s 2026-03-10T13:50:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:05 vm09 ceph-mon[53367]: pgmap v875: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T13:50:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:05 vm05 ceph-mon[58955]: pgmap v875: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T13:50:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:05 vm05 ceph-mon[51512]: pgmap v875: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T13:50:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:07 vm09 ceph-mon[53367]: pgmap v876: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:07 vm05 ceph-mon[58955]: pgmap v876: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:07 vm05 ceph-mon[51512]: pgmap v876: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:09.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:08 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:50:09.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:50:09.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:50:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:50:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:08 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:50:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:50:09.332 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:08 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:50:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:50:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:09 vm05 ceph-mon[58955]: pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-10T13:50:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:09 vm05 ceph-mon[51512]: pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-10T13:50:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:10.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:50:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:50:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:50:10.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:09 vm09 ceph-mon[53367]: pgmap v877: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s 2026-03-10T13:50:10.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:11 vm05 ceph-mon[58955]: pgmap v878: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T13:50:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:11 vm05 ceph-mon[58955]: osdmap e573: 8 total, 8 up, 8 in 2026-03-10T13:50:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:11 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:11 vm05 ceph-mon[51512]: pgmap v878: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T13:50:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:11 vm05 ceph-mon[51512]: osdmap e573: 8 total, 8 up, 8 in 2026-03-10T13:50:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:11 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:12.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:11 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:12.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:11 vm09 ceph-mon[53367]: pgmap v878: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T13:50:12.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:11 vm09 ceph-mon[53367]: osdmap e573: 8 total, 8 up, 8 in 2026-03-10T13:50:12.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:12.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:11 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:13.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:12 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:13.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:12 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116"}]: dispatch 2026-03-10T13:50:13.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:12 vm05 ceph-mon[58955]: osdmap e574: 8 total, 8 up, 8 in 2026-03-10T13:50:13.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:12 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116"}]: dispatch 2026-03-10T13:50:13.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:12 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:13.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:12 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116"}]: dispatch 2026-03-10T13:50:13.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:12 vm05 ceph-mon[51512]: osdmap e574: 8 total, 8 up, 8 in 2026-03-10T13:50:13.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:12 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116"}]: dispatch 2026-03-10T13:50:13.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:12 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:13.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:12 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116"}]: dispatch 2026-03-10T13:50:13.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:12 vm09 ceph-mon[53367]: osdmap e574: 8 total, 8 up, 8 in 2026-03-10T13:50:13.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:12 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116"}]: dispatch 2026-03-10T13:50:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:13 vm05 ceph-mon[58955]: pgmap v881: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T13:50:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:13 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:13 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116"}]': finished 2026-03-10T13:50:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:13 vm05 ceph-mon[58955]: osdmap e575: 8 total, 8 up, 8 in 2026-03-10T13:50:14.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:13 vm05 ceph-mon[51512]: pgmap v881: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T13:50:14.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:13 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:14.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:13 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116"}]': finished 2026-03-10T13:50:14.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:13 vm05 ceph-mon[51512]: osdmap e575: 8 total, 8 up, 8 in 2026-03-10T13:50:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:13 vm09 ceph-mon[53367]: pgmap v881: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T13:50:14.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:13 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:14.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:13 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-116"}]': finished 2026-03-10T13:50:14.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:13 vm09 ceph-mon[53367]: osdmap e575: 8 total, 8 up, 8 in 2026-03-10T13:50:15.324 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:14 vm09 ceph-mon[53367]: osdmap e576: 8 total, 8 up, 8 in 2026-03-10T13:50:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:14 vm05 ceph-mon[58955]: osdmap e576: 8 total, 8 up, 8 in 2026-03-10T13:50:15.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:14 vm05 ceph-mon[51512]: osdmap e576: 8 total, 8 up, 8 in 2026-03-10T13:50:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:16 vm05 ceph-mon[58955]: pgmap v884: 236 pgs: 
1 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 233 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-10T13:50:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:16 vm05 ceph-mon[58955]: osdmap e577: 8 total, 8 up, 8 in 2026-03-10T13:50:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:16 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:16.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:16 vm05 ceph-mon[51512]: pgmap v884: 236 pgs: 1 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 233 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-10T13:50:16.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:16 vm05 ceph-mon[51512]: osdmap e577: 8 total, 8 up, 8 in 2026-03-10T13:50:16.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:16.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:16 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:16 vm09 ceph-mon[53367]: pgmap v884: 236 pgs: 1 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 233 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-10T13:50:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:16 vm09 ceph-mon[53367]: osdmap e577: 8 total, 8 up, 8 in 2026-03-10T13:50:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:16 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:16 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-118","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:17.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:17 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:17.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:17 vm05 ceph-mon[58955]: osdmap e578: 8 total, 8 up, 8 in 2026-03-10T13:50:17.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:17 vm05 ceph-mon[58955]: pgmap v887: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 233 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-10T13:50:17.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:17 vm05 ceph-mon[58955]: osdmap e579: 8 total, 8 up, 8 in 2026-03-10T13:50:17.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:17.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:17 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:17 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:17 vm05 ceph-mon[51512]: osdmap e578: 8 total, 8 up, 8 in 2026-03-10T13:50:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:17 vm05 ceph-mon[51512]: pgmap v887: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 233 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-10T13:50:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:17 vm05 ceph-mon[51512]: osdmap e579: 8 total, 8 up, 8 in 2026-03-10T13:50:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:17 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:17.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:17 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:17.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:17 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-118","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:17.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:17 vm09 ceph-mon[53367]: osdmap e578: 8 total, 8 up, 8 in 2026-03-10T13:50:17.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:17 vm09 ceph-mon[53367]: pgmap v887: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 233 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s 2026-03-10T13:50:17.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:17 vm09 ceph-mon[53367]: osdmap e579: 8 total, 8 up, 8 in 2026-03-10T13:50:17.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:17.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:17 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:19.018 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:50:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:50:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:19 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:50:19.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:19 vm05 ceph-mon[58955]: osdmap e580: 8 total, 8 up, 8 in 2026-03-10T13:50:19.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:19 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-118"}]: dispatch 2026-03-10T13:50:19.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:19 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-118"}]: dispatch 2026-03-10T13:50:19.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:19 vm05 ceph-mon[58955]: pgmap v890: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 233 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:50:19.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:19.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:19 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:50:19.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:19 vm05 ceph-mon[51512]: osdmap e580: 8 total, 8 up, 8 in 2026-03-10T13:50:19.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-118"}]: dispatch 2026-03-10T13:50:19.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:19 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-118"}]: dispatch 2026-03-10T13:50:19.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:19 vm05 ceph-mon[51512]: pgmap v890: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 233 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:50:19.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:19.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:19 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:50:19.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:19 vm09 ceph-mon[53367]: osdmap e580: 8 total, 8 up, 8 in 2026-03-10T13:50:19.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:19 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-118"}]: dispatch 2026-03-10T13:50:19.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:19 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-118"}]: dispatch 2026-03-10T13:50:19.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:19 vm09 ceph-mon[53367]: pgmap v890: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 233 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:50:19.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:20 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-118"}]': finished 2026-03-10T13:50:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:20 vm05 ceph-mon[58955]: osdmap e581: 8 total, 8 up, 8 in 2026-03-10T13:50:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-118", "mode": "writeback"}]: dispatch 2026-03-10T13:50:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:20 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-118", "mode": "writeback"}]: dispatch 2026-03-10T13:50:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:20 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-118"}]': finished 2026-03-10T13:50:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:20 vm05 ceph-mon[51512]: osdmap e581: 8 total, 8 up, 8 in 2026-03-10T13:50:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:20 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-118", "mode": "writeback"}]: dispatch 2026-03-10T13:50:20.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:20 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-118", "mode": "writeback"}]: dispatch 2026-03-10T13:50:20.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:50:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:50:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:50:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:20 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-118"}]': finished 2026-03-10T13:50:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:20 vm09 ceph-mon[53367]: osdmap e581: 8 total, 8 up, 8 in 2026-03-10T13:50:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-118", "mode": "writeback"}]: dispatch 2026-03-10T13:50:20.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:20 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-118", "mode": "writeback"}]: dispatch 2026-03-10T13:50:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:21 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:50:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:21 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-118", "mode": "writeback"}]': finished 2026-03-10T13:50:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:21 vm05 ceph-mon[58955]: osdmap e582: 8 total, 8 up, 8 in 2026-03-10T13:50:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:21 vm05 ceph-mon[58955]: pgmap v893: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:50:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:21 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:50:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:21 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-118", "mode": "writeback"}]': finished 2026-03-10T13:50:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:21 vm05 ceph-mon[51512]: osdmap e582: 8 total, 8 up, 8 in 2026-03-10T13:50:21.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:21 vm05 ceph-mon[51512]: pgmap v893: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:50:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:21 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:50:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:21 vm09 
ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-118", "mode": "writeback"}]': finished 2026-03-10T13:50:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:21 vm09 ceph-mon[53367]: osdmap e582: 8 total, 8 up, 8 in 2026-03-10T13:50:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:21 vm09 ceph-mon[53367]: pgmap v893: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:50:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:22 vm05 ceph-mon[58955]: osdmap e583: 8 total, 8 up, 8 in 2026-03-10T13:50:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:22 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:22.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:22 vm05 ceph-mon[51512]: osdmap e583: 8 total, 8 up, 8 in 2026-03-10T13:50:22.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:22.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:22 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:22 vm09 ceph-mon[53367]: osdmap e583: 8 total, 8 up, 8 in 2026-03-10T13:50:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:22.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:22 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:23 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:23 vm09 ceph-mon[53367]: osdmap e584: 8 total, 8 up, 8 in 2026-03-10T13:50:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:23 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118"}]: dispatch 2026-03-10T13:50:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:23 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118"}]: dispatch 2026-03-10T13:50:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:23 vm09 ceph-mon[53367]: pgmap v896: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:50:23.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:50:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:23 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:23 vm05 ceph-mon[58955]: osdmap e584: 8 total, 8 up, 8 in 2026-03-10T13:50:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118"}]: dispatch 2026-03-10T13:50:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:23 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118"}]: dispatch 2026-03-10T13:50:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:23 vm05 ceph-mon[58955]: pgmap v896: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:50:23.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:50:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:23 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:23 vm05 ceph-mon[51512]: osdmap e584: 8 total, 8 up, 8 in 2026-03-10T13:50:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:23 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118"}]: dispatch 2026-03-10T13:50:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:23 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118"}]: dispatch 2026-03-10T13:50:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:23 vm05 ceph-mon[51512]: pgmap v896: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:50:23.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:50:24.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:24 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:24.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:24 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118"}]': finished 2026-03-10T13:50:24.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:24 vm09 ceph-mon[53367]: osdmap e585: 8 total, 8 up, 8 in 2026-03-10T13:50:24.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:24 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:24.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:24 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118"}]': finished 2026-03-10T13:50:24.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:24 vm05 ceph-mon[58955]: osdmap e585: 8 total, 8 up, 8 in 2026-03-10T13:50:24.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:24 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:24.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:24 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-118"}]': finished 2026-03-10T13:50:24.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:24 vm05 ceph-mon[51512]: osdmap e585: 8 total, 8 up, 8 in 2026-03-10T13:50:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:25 vm05 ceph-mon[58955]: osdmap e586: 8 total, 8 up, 8 in 2026-03-10T13:50:25.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:25 vm05 ceph-mon[58955]: pgmap v899: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:25.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:25 vm05 ceph-mon[51512]: osdmap e586: 8 total, 8 up, 8 in 2026-03-10T13:50:25.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:25 vm05 ceph-mon[51512]: pgmap v899: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:25.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 
10 13:50:25 vm09 ceph-mon[53367]: osdmap e586: 8 total, 8 up, 8 in 2026-03-10T13:50:25.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:25 vm09 ceph-mon[53367]: pgmap v899: 236 pgs: 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:26.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:26 vm05 ceph-mon[58955]: osdmap e587: 8 total, 8 up, 8 in 2026-03-10T13:50:26.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:26.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:26 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:26.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:26 vm05 ceph-mon[51512]: osdmap e587: 8 total, 8 up, 8 in 2026-03-10T13:50:26.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:26.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:26 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:26 vm09 ceph-mon[53367]: osdmap e587: 8 total, 8 up, 8 in 2026-03-10T13:50:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:26.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:26 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-120","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[58955]: pgmap v901: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[58955]: osdmap e588: 8 total, 8 up, 8 in 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-120"}]: dispatch 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[58955]: osdmap e589: 8 total, 8 up, 8 in 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-120"}]: dispatch 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[51512]: pgmap v901: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[51512]: osdmap e588: 8 total, 8 up, 8 in 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-120"}]: dispatch 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[51512]: osdmap e589: 8 total, 8 up, 8 in 2026-03-10T13:50:27.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:27 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-120"}]: dispatch 2026-03-10T13:50:27.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:27 vm09 ceph-mon[53367]: pgmap v901: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:27.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:27 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:50:27.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:27 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-120","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:27.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:27 vm09 ceph-mon[53367]: osdmap e588: 8 total, 8 up, 8 in 2026-03-10T13:50:27.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:27.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:27 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:27.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:27 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:50:27.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:27 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-120"}]: dispatch 2026-03-10T13:50:27.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:27 vm09 ceph-mon[53367]: osdmap e589: 8 total, 8 up, 8 in 2026-03-10T13:50:27.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:27 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-120"}]: dispatch 2026-03-10T13:50:28.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:28 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-120"}]': finished 2026-03-10T13:50:28.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:28 vm05 ceph-mon[58955]: osdmap e590: 8 total, 8 up, 8 in 2026-03-10T13:50:28.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-120", "mode": "writeback"}]: dispatch 2026-03-10T13:50:28.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:28 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-120", "mode": "writeback"}]: dispatch 2026-03-10T13:50:28.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:28 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-120"}]': finished 2026-03-10T13:50:28.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:28 vm05 ceph-mon[51512]: osdmap e590: 8 total, 8 up, 8 in 2026-03-10T13:50:28.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-120", "mode": "writeback"}]: dispatch 2026-03-10T13:50:28.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:28 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-120", "mode": "writeback"}]: dispatch 2026-03-10T13:50:28.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:28 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-120"}]': finished 2026-03-10T13:50:28.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:28 vm09 ceph-mon[53367]: osdmap e590: 8 total, 8 up, 8 in 2026-03-10T13:50:28.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:28 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-120", "mode": "writeback"}]: dispatch 2026-03-10T13:50:28.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:28 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-120", "mode": "writeback"}]: dispatch 2026-03-10T13:50:29.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:50:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:50:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:29 vm05 ceph-mon[58955]: pgmap v905: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:50:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:29 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:50:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:29 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-120", "mode": "writeback"}]': finished 2026-03-10T13:50:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:29 vm05 ceph-mon[58955]: osdmap e591: 8 total, 8 up, 8 in 2026-03-10T13:50:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:29 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:29 vm05 ceph-mon[51512]: pgmap v905: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:50:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:29 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:50:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:29 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-120", "mode": "writeback"}]': finished 2026-03-10T13:50:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:29 vm05 ceph-mon[51512]: osdmap e591: 8 total, 8 up, 8 in 2026-03-10T13:50:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:29 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:29 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:29 vm09 ceph-mon[53367]: pgmap v905: 268 pgs: 32 unknown, 1 active+clean+snaptrim, 235 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:50:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:29 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:50:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:29 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-120", "mode": "writeback"}]': finished 2026-03-10T13:50:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:29 vm09 ceph-mon[53367]: osdmap e591: 8 total, 8 up, 8 in 2026-03-10T13:50:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:29 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:30.317 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:50:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:50:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:50:30.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:30 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:30.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:30 vm05 ceph-mon[58955]: osdmap e592: 8 total, 8 up, 8 in 2026-03-10T13:50:30.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:30 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120"}]: dispatch 2026-03-10T13:50:30.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:30 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120"}]: dispatch 2026-03-10T13:50:30.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:30 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:30.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:30 vm05 ceph-mon[51512]: osdmap e592: 8 total, 8 up, 8 in 2026-03-10T13:50:30.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:30 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120"}]: dispatch 2026-03-10T13:50:30.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:30 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120"}]: dispatch 2026-03-10T13:50:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:30 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:30 vm09 ceph-mon[53367]: osdmap e592: 8 total, 8 up, 8 in 2026-03-10T13:50:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:30 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120"}]: dispatch 2026-03-10T13:50:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:30 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120"}]: dispatch 2026-03-10T13:50:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:31 vm05 ceph-mon[58955]: pgmap v908: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 773 B/s wr, 3 op/s 2026-03-10T13:50:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:31 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:31 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120"}]': finished 2026-03-10T13:50:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:31 vm05 ceph-mon[58955]: osdmap e593: 8 total, 8 up, 8 in 2026-03-10T13:50:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:31 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:50:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:31 vm05 ceph-mon[58955]: osdmap e594: 8 total, 8 up, 8 in 2026-03-10T13:50:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:31 vm05 ceph-mon[51512]: pgmap v908: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 773 B/s wr, 3 op/s 2026-03-10T13:50:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:31 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:31 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120"}]': finished 2026-03-10T13:50:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:31 vm05 ceph-mon[51512]: osdmap e593: 8 total, 8 up, 8 in 2026-03-10T13:50:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:31 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:50:31.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:31 vm05 ceph-mon[51512]: osdmap e594: 8 total, 8 up, 8 in 2026-03-10T13:50:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:31 vm09 ceph-mon[53367]: pgmap v908: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 773 B/s wr, 3 op/s 2026-03-10T13:50:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:31 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:31 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-120"}]': finished 2026-03-10T13:50:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:31 vm09 ceph-mon[53367]: 
osdmap e593: 8 total, 8 up, 8 in 2026-03-10T13:50:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:31 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:50:31.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:31 vm09 ceph-mon[53367]: osdmap e594: 8 total, 8 up, 8 in 2026-03-10T13:50:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:33 vm05 ceph-mon[58955]: pgmap v911: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:50:33.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:33 vm05 ceph-mon[58955]: osdmap e595: 8 total, 8 up, 8 in 2026-03-10T13:50:33.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:33.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:33 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:33 vm05 ceph-mon[51512]: pgmap v911: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:50:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:33 vm05 ceph-mon[51512]: osdmap e595: 8 total, 8 up, 8 in 2026-03-10T13:50:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:33 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:33.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:33 vm09 ceph-mon[53367]: pgmap v911: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:50:33.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:33 vm09 ceph-mon[53367]: osdmap e595: 8 total, 8 up, 8 in 2026-03-10T13:50:33.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:33 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:33.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:33 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:34.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:34.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[58955]: osdmap e596: 8 total, 8 up, 8 in 2026-03-10T13:50:34.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:34.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:34.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:50:34.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-122"}]: dispatch 2026-03-10T13:50:34.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[58955]: osdmap e597: 8 total, 8 up, 8 in 2026-03-10T13:50:34.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-122"}]: dispatch 2026-03-10T13:50:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[51512]: osdmap e596: 8 total, 8 up, 8 in 2026-03-10T13:50:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:50:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-122"}]: dispatch 2026-03-10T13:50:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[51512]: osdmap e597: 8 total, 8 up, 8 in 2026-03-10T13:50:34.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:34 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-122"}]: dispatch 2026-03-10T13:50:34.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:34 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:34.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:34 vm09 ceph-mon[53367]: osdmap e596: 8 total, 8 up, 8 in 2026-03-10T13:50:34.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:34.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:34 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:34.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:34 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:50:34.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:34 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-122"}]: dispatch 2026-03-10T13:50:34.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:34 vm09 ceph-mon[53367]: osdmap e597: 8 total, 8 up, 8 in 2026-03-10T13:50:34.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:34 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-122"}]: dispatch 2026-03-10T13:50:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:35 vm05 ceph-mon[58955]: pgmap v914: 268 pgs: 5 creating+activating, 5 creating+peering, 258 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:50:35.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:35 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-122"}]': finished 2026-03-10T13:50:35.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:35 vm05 ceph-mon[58955]: osdmap e598: 8 total, 8 up, 8 in 2026-03-10T13:50:35.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-122", "mode": "writeback"}]: dispatch 2026-03-10T13:50:35.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:35 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-122", "mode": "writeback"}]: dispatch 2026-03-10T13:50:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:35 vm05 ceph-mon[51512]: pgmap v914: 268 pgs: 5 creating+activating, 5 creating+peering, 258 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:50:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:35 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-122"}]': finished 2026-03-10T13:50:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:35 vm05 ceph-mon[51512]: osdmap e598: 8 total, 8 up, 8 in 2026-03-10T13:50:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:35 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-122", "mode": "writeback"}]: dispatch 2026-03-10T13:50:35.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:35 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-122", "mode": "writeback"}]: dispatch 2026-03-10T13:50:35.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:35 vm09 ceph-mon[53367]: pgmap v914: 268 pgs: 5 creating+activating, 5 creating+peering, 258 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:50:35.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:35 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-122"}]': finished 2026-03-10T13:50:35.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:35 vm09 ceph-mon[53367]: osdmap e598: 8 total, 8 up, 8 in 2026-03-10T13:50:35.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-122", "mode": "writeback"}]: dispatch 2026-03-10T13:50:35.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:35 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-122", "mode": "writeback"}]: dispatch 2026-03-10T13:50:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:36 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:50:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:36 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-122", "mode": "writeback"}]': finished 2026-03-10T13:50:36.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:36 vm05 ceph-mon[58955]: osdmap e599: 8 total, 8 up, 8 in 2026-03-10T13:50:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:36 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:50:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:36 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-122", "mode": "writeback"}]': finished 2026-03-10T13:50:36.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:36 vm05 ceph-mon[51512]: osdmap e599: 8 total, 8 up, 8 in 2026-03-10T13:50:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:36 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:50:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:36 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-122", "mode": "writeback"}]': finished 2026-03-10T13:50:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:36 vm09 ceph-mon[53367]: osdmap e599: 8 total, 8 up, 8 in 2026-03-10T13:50:37.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:37 vm05 ceph-mon[58955]: pgmap v917: 268 pgs: 5 creating+activating, 5 creating+peering, 258 active+clean; 455 KiB data, 
1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:50:37.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:37.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:37 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:37 vm05 ceph-mon[51512]: pgmap v917: 268 pgs: 5 creating+activating, 5 creating+peering, 258 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:50:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:37.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:37 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:37.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:37 vm09 ceph-mon[53367]: pgmap v917: 268 pgs: 5 creating+activating, 5 creating+peering, 258 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:50:37.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:37.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:37 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:38 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:38 vm05 ceph-mon[58955]: osdmap e600: 8 total, 8 up, 8 in 2026-03-10T13:50:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:38 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122"}]: dispatch 2026-03-10T13:50:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:38 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122"}]: dispatch 2026-03-10T13:50:38.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:50:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:38 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:38 vm05 ceph-mon[51512]: osdmap e600: 8 total, 8 up, 8 in 2026-03-10T13:50:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122"}]: dispatch 2026-03-10T13:50:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:38 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122"}]: dispatch 2026-03-10T13:50:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:50:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:38 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:38 vm09 ceph-mon[53367]: osdmap e600: 8 total, 8 up, 8 in 2026-03-10T13:50:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:38 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122"}]: dispatch 2026-03-10T13:50:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:38 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122"}]: dispatch 2026-03-10T13:50:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:50:39.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:50:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:50:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:39 vm05 ceph-mon[58955]: pgmap v920: 268 pgs: 5 creating+activating, 5 creating+peering, 258 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:50:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:39 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:39.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:39 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122"}]': finished 2026-03-10T13:50:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:39 vm05 ceph-mon[58955]: osdmap e601: 8 total, 8 up, 8 in 2026-03-10T13:50:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:39 vm05 ceph-mon[51512]: pgmap v920: 268 pgs: 5 creating+activating, 5 creating+peering, 258 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:50:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:39 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:39 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122"}]': finished 2026-03-10T13:50:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:39 vm05 ceph-mon[51512]: osdmap e601: 8 total, 8 up, 8 in 2026-03-10T13:50:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:39 vm09 ceph-mon[53367]: pgmap v920: 268 pgs: 5 creating+activating, 5 creating+peering, 258 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:50:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:39 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:39 
vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-122"}]': finished 2026-03-10T13:50:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:39 vm09 ceph-mon[53367]: osdmap e601: 8 total, 8 up, 8 in 2026-03-10T13:50:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:50:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:50:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:50:40.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:40 vm09 ceph-mon[53367]: osdmap e602: 8 total, 8 up, 8 in 2026-03-10T13:50:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:40 vm05 ceph-mon[58955]: osdmap e602: 8 total, 8 up, 8 in 2026-03-10T13:50:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:40 vm05 ceph-mon[51512]: osdmap e602: 8 total, 8 up, 8 in 2026-03-10T13:50:41.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:41 vm09 ceph-mon[53367]: pgmap v923: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:50:41.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:41 vm09 ceph-mon[53367]: osdmap e603: 8 total, 8 up, 8 in 2026-03-10T13:50:41.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:41.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:41 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:41 vm05 ceph-mon[58955]: pgmap v923: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:50:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:41 vm05 ceph-mon[58955]: osdmap e603: 8 total, 8 up, 8 in 2026-03-10T13:50:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:41 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:41 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:41.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:41 vm05 ceph-mon[51512]: pgmap v923: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:50:41.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:41 vm05 ceph-mon[51512]: osdmap e603: 8 total, 8 up, 8 in 2026-03-10T13:50:41.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:41 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:41.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:41 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-124","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:42 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:42.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:42 vm09 ceph-mon[53367]: osdmap e604: 8 total, 8 up, 8 in 2026-03-10T13:50:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:42 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:42 vm05 ceph-mon[58955]: osdmap e604: 8 total, 8 up, 8 in 2026-03-10T13:50:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:42 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-124","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:42.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:42 vm05 ceph-mon[51512]: osdmap e604: 8 total, 8 up, 8 in 2026-03-10T13:50:43.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:43 vm09 ceph-mon[53367]: pgmap v926: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:50:43.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:43 vm09 ceph-mon[53367]: osdmap e605: 8 total, 8 up, 8 in 2026-03-10T13:50:43.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:43 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:43.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:43 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:43.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:43 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:50:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:43 vm05 ceph-mon[58955]: pgmap v926: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:50:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:43 vm05 ceph-mon[58955]: osdmap e605: 8 total, 8 up, 8 in 2026-03-10T13:50:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:43 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:43 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:50:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:43 vm05 ceph-mon[51512]: pgmap v926: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:50:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:43 vm05 ceph-mon[51512]: osdmap e605: 8 total, 8 up, 8 in 2026-03-10T13:50:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:43 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:43 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:43 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:50:44.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:44 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:50:44.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-124"}]: dispatch 2026-03-10T13:50:44.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:44 vm09 ceph-mon[53367]: osdmap e606: 8 total, 8 up, 8 in 2026-03-10T13:50:44.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:44 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-124"}]: dispatch 2026-03-10T13:50:44.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:44 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:50:44.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:44 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:50:44.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:44 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:50:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:44 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:50:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:44 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-124"}]: dispatch 2026-03-10T13:50:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:44 vm05 ceph-mon[58955]: osdmap e606: 8 total, 8 up, 8 in 2026-03-10T13:50:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:44 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-124"}]: dispatch 2026-03-10T13:50:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:50:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:50:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:44 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:50:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:44 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:50:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:44 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-124"}]: dispatch 2026-03-10T13:50:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:44 vm05 ceph-mon[51512]: osdmap e606: 8 total, 8 up, 8 in 2026-03-10T13:50:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:44 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-124"}]: dispatch 2026-03-10T13:50:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:50:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:50:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:44 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:50:45.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:45 vm09 ceph-mon[53367]: pgmap v929: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:50:45.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:45 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-124"}]': finished 2026-03-10T13:50:45.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:45 vm09 ceph-mon[53367]: osdmap e607: 8 total, 8 up, 8 in 2026-03-10T13:50:45.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 
13:50:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-124", "mode": "writeback"}]: dispatch 2026-03-10T13:50:45.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:45 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-124", "mode": "writeback"}]: dispatch 2026-03-10T13:50:45.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:45 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:50:45.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:45 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-124", "mode": "writeback"}]': finished 2026-03-10T13:50:45.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:45 vm09 ceph-mon[53367]: osdmap e608: 8 total, 8 up, 8 in 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[51512]: pgmap v929: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-124"}]': finished 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[51512]: osdmap e607: 8 total, 8 up, 8 in 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-124", "mode": "writeback"}]: dispatch 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-124", "mode": "writeback"}]: dispatch 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-124", "mode": "writeback"}]': finished 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[51512]: osdmap e608: 8 total, 8 up, 8 in 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[58955]: pgmap v929: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-124"}]': finished 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[58955]: osdmap e607: 8 total, 8 up, 8 in 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-124", "mode": "writeback"}]: dispatch 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-124", "mode": "writeback"}]: dispatch 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-124", "mode": "writeback"}]': finished 2026-03-10T13:50:45.733 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:45 vm05 ceph-mon[58955]: osdmap e608: 8 total, 8 up, 8 in 2026-03-10T13:50:47.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:47 vm09 ceph-mon[53367]: pgmap v932: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:50:47.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:47 vm09 ceph-mon[53367]: osdmap e609: 8 total, 8 up, 8 in 2026-03-10T13:50:47.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:47 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:47.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:47 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:47 vm05 ceph-mon[58955]: pgmap v932: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:50:47.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:47 vm05 ceph-mon[58955]: osdmap e609: 8 total, 8 up, 8 in 2026-03-10T13:50:47.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:47 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:47.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:47 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:47 vm05 ceph-mon[51512]: pgmap v932: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T13:50:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:47 vm05 ceph-mon[51512]: osdmap e609: 8 total, 8 up, 8 in 2026-03-10T13:50:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:47 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:48.767 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:48 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:48.767 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124"}]: dispatch 2026-03-10T13:50:48.767 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:48 vm09 ceph-mon[53367]: osdmap e610: 8 total, 8 up, 8 in 2026-03-10T13:50:48.767 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:48 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124"}]: dispatch 2026-03-10T13:50:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:48 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:48.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:48 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124"}]: dispatch 2026-03-10T13:50:48.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:48 vm05 ceph-mon[58955]: osdmap e610: 8 total, 8 up, 8 in 2026-03-10T13:50:48.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:48 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124"}]: dispatch 2026-03-10T13:50:48.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:48 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:48.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124"}]: dispatch 2026-03-10T13:50:48.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:48 vm05 ceph-mon[51512]: osdmap e610: 8 total, 8 up, 8 in 2026-03-10T13:50:48.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:48 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124"}]: dispatch 2026-03-10T13:50:49.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:50:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:50:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:49 vm05 ceph-mon[58955]: pgmap v935: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:50:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:49 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:49 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124"}]': finished 2026-03-10T13:50:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:49 vm05 ceph-mon[58955]: osdmap e611: 8 total, 8 up, 8 in 2026-03-10T13:50:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:49 vm05 ceph-mon[51512]: pgmap v935: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:50:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:49 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:49 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124"}]': finished 2026-03-10T13:50:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:49 vm05 ceph-mon[51512]: osdmap 
e611: 8 total, 8 up, 8 in 2026-03-10T13:50:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:49 vm09 ceph-mon[53367]: pgmap v935: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:50:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:49 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:49 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-124"}]': finished 2026-03-10T13:50:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:49 vm09 ceph-mon[53367]: osdmap e611: 8 total, 8 up, 8 in 2026-03-10T13:50:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:50:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:50:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:50:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:50 vm09 ceph-mon[53367]: osdmap e612: 8 total, 8 up, 8 in 2026-03-10T13:50:51.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:50 vm05 ceph-mon[51512]: osdmap e612: 8 total, 8 up, 8 in 2026-03-10T13:50:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:50 vm05 ceph-mon[58955]: osdmap e612: 8 total, 8 up, 8 in 2026-03-10T13:50:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:51 vm09 ceph-mon[53367]: pgmap v938: 236 pgs: 2 active+clean+snaptrim, 234 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:51 vm09 ceph-mon[53367]: osdmap e613: 8 total, 8 up, 8 in 2026-03-10T13:50:51.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:51.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:51 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:52.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:51 vm05 ceph-mon[51512]: pgmap v938: 236 pgs: 2 active+clean+snaptrim, 234 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:52.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:51 vm05 ceph-mon[51512]: osdmap e613: 8 total, 8 up, 8 in 2026-03-10T13:50:52.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:51 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:52.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:51 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:51 vm05 ceph-mon[58955]: pgmap v938: 236 pgs: 2 active+clean+snaptrim, 234 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:51 vm05 ceph-mon[58955]: osdmap e613: 8 total, 8 up, 8 in 2026-03-10T13:50:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:51 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-126","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:52 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:52 vm05 ceph-mon[58955]: osdmap e614: 8 total, 8 up, 8 in 2026-03-10T13:50:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:53.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:52 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:53.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:52 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:53.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:52 vm05 ceph-mon[51512]: osdmap e614: 8 total, 8 up, 8 in 2026-03-10T13:50:53.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:52 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:53.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:52 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:53.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:52 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-126","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:53.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:52 vm09 ceph-mon[53367]: osdmap e614: 8 total, 8 up, 8 in 2026-03-10T13:50:53.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:53.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:52 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:53 vm05 ceph-mon[58955]: pgmap v941: 268 pgs: 32 unknown, 2 active+clean+snaptrim, 234 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:53 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:50:54.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:53 vm05 ceph-mon[58955]: osdmap e615: 8 total, 8 up, 8 in 2026-03-10T13:50:54.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:53 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-126"}]: dispatch 2026-03-10T13:50:54.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:53 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-126"}]: dispatch 2026-03-10T13:50:54.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:53 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:50:54.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:50:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:53 vm05 ceph-mon[51512]: pgmap v941: 268 pgs: 32 unknown, 2 active+clean+snaptrim, 234 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:53 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:50:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:53 vm05 ceph-mon[51512]: osdmap e615: 8 total, 8 up, 8 in 2026-03-10T13:50:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-126"}]: dispatch 2026-03-10T13:50:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:53 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-126"}]: dispatch 2026-03-10T13:50:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:53 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:50:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:50:54.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:53 vm09 ceph-mon[53367]: pgmap v941: 268 pgs: 32 unknown, 2 active+clean+snaptrim, 234 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:54.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:53 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:50:54.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:53 vm09 ceph-mon[53367]: osdmap e615: 8 total, 8 up, 8 in 2026-03-10T13:50:54.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:53 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-126"}]: dispatch 2026-03-10T13:50:54.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:53 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-126"}]: dispatch 2026-03-10T13:50:54.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:53 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:50:54.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:50:55.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:54 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-126"}]': finished 2026-03-10T13:50:55.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:54 vm09 ceph-mon[53367]: osdmap e616: 8 total, 8 up, 8 in 2026-03-10T13:50:55.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:54 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-126", "mode": "writeback"}]: dispatch 2026-03-10T13:50:55.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:54 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-126", "mode": "writeback"}]: dispatch 2026-03-10T13:50:55.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:54 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-126"}]': finished 2026-03-10T13:50:55.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:54 vm05 ceph-mon[58955]: osdmap e616: 8 total, 8 up, 8 in 2026-03-10T13:50:55.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:54 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-126", "mode": "writeback"}]: dispatch 2026-03-10T13:50:55.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:54 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-126", "mode": "writeback"}]: dispatch 2026-03-10T13:50:55.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:54 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-126"}]': finished 2026-03-10T13:50:55.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:54 vm05 ceph-mon[51512]: osdmap e616: 8 total, 8 up, 8 in 2026-03-10T13:50:55.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:54 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-126", "mode": "writeback"}]: dispatch 2026-03-10T13:50:55.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:54 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-126", "mode": "writeback"}]: dispatch 2026-03-10T13:50:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:55 vm09 ceph-mon[53367]: pgmap v944: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:55 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:50:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:55 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-126", "mode": "writeback"}]': finished 2026-03-10T13:50:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:55 vm09 ceph-mon[53367]: osdmap e617: 8 total, 8 up, 8 in 2026-03-10T13:50:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:55 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:56.246 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:55 vm05 ceph-mon[51512]: pgmap v944: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:56.247 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:55 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:50:56.247 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:55 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-126", "mode": "writeback"}]': finished 2026-03-10T13:50:56.247 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:55 vm05 ceph-mon[51512]: osdmap e617: 8 total, 8 up, 8 in 2026-03-10T13:50:56.247 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:55 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:56.247 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:55 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:56.247 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:55 vm05 ceph-mon[58955]: pgmap v944: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:56.247 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:55 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:50:56.247 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:55 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-126", "mode": "writeback"}]': finished 2026-03-10T13:50:56.247 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:55 vm05 ceph-mon[58955]: osdmap e617: 8 total, 8 up, 8 in 2026-03-10T13:50:56.247 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:56.247 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:55 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:50:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:56 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:56 vm05 ceph-mon[58955]: osdmap e618: 8 total, 8 up, 8 in 2026-03-10T13:50:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:56 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126"}]: dispatch 2026-03-10T13:50:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:56 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126"}]: dispatch 2026-03-10T13:50:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:56 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:56 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126"}]': finished 2026-03-10T13:50:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:56 vm05 ceph-mon[58955]: osdmap e619: 8 total, 8 up, 8 in 2026-03-10T13:50:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:56 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:56 vm05 ceph-mon[51512]: osdmap e618: 8 total, 8 up, 8 in 2026-03-10T13:50:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:56 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126"}]: dispatch 2026-03-10T13:50:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:56 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126"}]: dispatch 2026-03-10T13:50:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:56 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:56 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126"}]': finished 2026-03-10T13:50:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:56 vm05 ceph-mon[51512]: osdmap e619: 8 total, 8 up, 8 in 2026-03-10T13:50:57.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:56 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:50:57.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:56 vm09 ceph-mon[53367]: osdmap e618: 8 total, 8 up, 8 in 2026-03-10T13:50:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:56 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126"}]: dispatch 2026-03-10T13:50:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:56 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126"}]: dispatch 2026-03-10T13:50:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:56 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:50:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:56 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-126"}]': finished 2026-03-10T13:50:57.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:56 vm09 ceph-mon[53367]: osdmap e619: 8 total, 8 up, 8 in 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TierFlushDuringFlush (9135 ms) 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapHasChunk 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapHasChunk (6081 ms) 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRollback 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRollback (5163 ms) 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRollbackRefcount 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRollbackRefcount (25374 ms) 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvictRollback 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvictRollback (14143 ms) 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PropagateBaseTierError 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PropagateBaseTierError (12055 ms) 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HelloWriteReturn 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: 00000000 79 6f 75 20 6d 69 67 68 74 20 73 65 65 20 74 68 |you might see th| 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: 00000010 69 73 |is| 2026-03-10T13:50:57.242 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: 00000012 2026-03-10T13:50:57.242 
INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HelloWriteReturn (12197 ms) 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TierFlushDuringUnsetDedupTier 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TierFlushDuringUnsetDedupTier (6150 ms) 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 48 tests from LibRadosTwoPoolsPP (558957 ms total) 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 4 tests from LibRadosTierECPP 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.Dirty 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.Dirty (1017 ms) 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.FlushWriteRaces 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.FlushWriteRaces (11107 ms) 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.CallForcesPromote 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.CallForcesPromote (18211 ms) 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.HitSetNone 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.HitSetNone (1 ms) 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 4 tests from LibRadosTierECPP (30336 ms total) 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 22 tests from LibRadosTwoPoolsECPP 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Overlay 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Overlay (7166 ms) 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Promote 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Promote (8103 ms) 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteSnap 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: waiting for scrub... 
2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: done waiting 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteSnap (24392 ms) 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteSnapTrimRace 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteSnapTrimRace (10149 ms) 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Whiteout 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Whiteout (7097 ms) 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Evict 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Evict (8092 ms) 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.EvictSnap 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.EvictSnap (10229 ms) 2026-03-10T13:50:57.243 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TryFlush 2026-03-10T13:50:58.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:57 vm09 ceph-mon[53367]: pgmap v947: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:58.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:57 vm09 ceph-mon[53367]: osdmap e620: 8 total, 8 up, 8 in 2026-03-10T13:50:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:57 vm05 ceph-mon[58955]: pgmap v947: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:57 vm05 ceph-mon[58955]: osdmap e620: 8 total, 8 up, 8 in 2026-03-10T13:50:58.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:57 vm05 ceph-mon[51512]: pgmap v947: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:50:58.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:57 vm05 ceph-mon[51512]: osdmap e620: 8 total, 8 up, 8 in 2026-03-10T13:50:59.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:50:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:50:59.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[58955]: pgmap v950: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:50:59.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[58955]: osdmap e621: 8 total, 8 up, 8 in 2026-03-10T13:50:59.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:59.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:59.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:59.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:59.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:59.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[58955]: osdmap e622: 8 total, 8 up, 8 in 2026-03-10T13:50:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[51512]: pgmap v950: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:50:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[51512]: osdmap e621: 8 total, 8 up, 8 in 2026-03-10T13:50:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:50:59 vm05 ceph-mon[51512]: osdmap e622: 8 total, 8 up, 8 in 2026-03-10T13:50:59.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:59 vm09 ceph-mon[53367]: pgmap v950: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:50:59.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:59 vm09 ceph-mon[53367]: osdmap e621: 8 total, 8 up, 8 in 2026-03-10T13:50:59.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:59.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:59 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:50:59.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:50:59.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:59 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:50:59.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:50:59.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:50:59 vm09 ceph-mon[53367]: osdmap e622: 8 total, 8 up, 8 in 2026-03-10T13:51:00.283 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:50:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:50:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:51:00.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:00 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:00.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:00 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:51:00.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:00 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-128"}]: dispatch 2026-03-10T13:51:00.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:00 vm05 ceph-mon[58955]: osdmap e623: 8 total, 8 up, 8 in 2026-03-10T13:51:00.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:00 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-128"}]: dispatch 2026-03-10T13:51:00.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:00 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:00.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:00 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:51:00.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:00 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-128"}]: dispatch 2026-03-10T13:51:00.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:00 vm05 ceph-mon[51512]: osdmap e623: 8 total, 8 up, 8 in 2026-03-10T13:51:00.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:00 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-128"}]: dispatch 2026-03-10T13:51:00.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:00 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:00.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:00 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:51:00.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:00 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-128"}]: dispatch 2026-03-10T13:51:00.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:00 vm09 ceph-mon[53367]: osdmap e623: 8 total, 8 up, 8 in 2026-03-10T13:51:00.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:00 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-128"}]: dispatch 2026-03-10T13:51:01.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:01 vm05 ceph-mon[58955]: pgmap v953: 268 pgs: 29 creating+peering, 3 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:51:01.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:01 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-128"}]': finished 2026-03-10T13:51:01.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-128", "mode": "writeback"}]: dispatch 2026-03-10T13:51:01.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:01 vm05 ceph-mon[58955]: osdmap e624: 8 total, 8 up, 8 in 2026-03-10T13:51:01.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:01 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-128", "mode": "writeback"}]: dispatch 2026-03-10T13:51:01.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:01 vm05 ceph-mon[51512]: pgmap v953: 268 pgs: 29 creating+peering, 3 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:51:01.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:01 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-128"}]': finished 2026-03-10T13:51:01.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:01 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-128", "mode": "writeback"}]: dispatch 2026-03-10T13:51:01.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:01 vm05 ceph-mon[51512]: osdmap e624: 8 total, 8 up, 8 in 2026-03-10T13:51:01.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:01 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-128", "mode": "writeback"}]: dispatch 2026-03-10T13:51:01.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:01 vm09 ceph-mon[53367]: pgmap v953: 268 pgs: 29 creating+peering, 3 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:51:01.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:01 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-128"}]': finished 2026-03-10T13:51:01.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-128", "mode": "writeback"}]: dispatch 2026-03-10T13:51:01.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:01 vm09 ceph-mon[53367]: osdmap e624: 8 total, 8 up, 8 in 2026-03-10T13:51:01.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:01 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-128", "mode": "writeback"}]: dispatch 2026-03-10T13:51:02.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:02 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:51:02.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:02 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:51:02.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:02 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:51:03.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:03 vm05 ceph-mon[58955]: pgmap v956: 268 pgs: 29 creating+peering, 3 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:51:03.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:03 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-128", "mode": "writeback"}]': finished 2026-03-10T13:51:03.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:03 vm05 ceph-mon[58955]: osdmap e625: 8 total, 8 up, 8 in 2026-03-10T13:51:03.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:03 vm05 ceph-mon[51512]: pgmap v956: 268 pgs: 29 creating+peering, 3 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:51:03.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:03 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-128", "mode": "writeback"}]': finished 2026-03-10T13:51:03.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 
13:51:03 vm05 ceph-mon[51512]: osdmap e625: 8 total, 8 up, 8 in 2026-03-10T13:51:03.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:03 vm09 ceph-mon[53367]: pgmap v956: 268 pgs: 29 creating+peering, 3 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:51:03.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:03 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-128", "mode": "writeback"}]': finished 2026-03-10T13:51:03.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:03 vm09 ceph-mon[53367]: osdmap e625: 8 total, 8 up, 8 in 2026-03-10T13:51:05.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:05 vm05 ceph-mon[58955]: pgmap v958: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 416 B/s wr, 1 op/s 2026-03-10T13:51:05.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:05 vm05 ceph-mon[51512]: pgmap v958: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 416 B/s wr, 1 op/s 2026-03-10T13:51:05.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:05 vm09 ceph-mon[53367]: pgmap v958: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 416 B/s wr, 1 op/s 2026-03-10T13:51:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:07 vm05 ceph-mon[58955]: pgmap v959: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s 2026-03-10T13:51:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:07 vm05 ceph-mon[51512]: pgmap v959: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s 2026-03-10T13:51:07.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:07 vm09 ceph-mon[53367]: pgmap v959: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s 2026-03-10T13:51:08.780 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:08 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:08.780 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:08 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:08.780 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:08 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:51:08.780 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:08 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:08 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:08 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:51:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:08 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:08 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:08 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:51:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:09.173 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:51:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:51:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:09 vm05 ceph-mon[58955]: pgmap v960: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 775 B/s rd, 258 B/s wr, 1 op/s 2026-03-10T13:51:09.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:09 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:51:09.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:09 vm05 ceph-mon[58955]: osdmap e626: 8 total, 8 up, 8 in 2026-03-10T13:51:09.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:09 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128"}]: dispatch 2026-03-10T13:51:09.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:09 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128"}]: dispatch 2026-03-10T13:51:09.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:51:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:09 vm05 ceph-mon[51512]: pgmap v960: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 775 B/s rd, 258 B/s wr, 1 op/s 2026-03-10T13:51:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:09 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:51:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:09 vm05 ceph-mon[51512]: osdmap e626: 8 total, 8 up, 8 in 2026-03-10T13:51:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:09 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128"}]: dispatch 2026-03-10T13:51:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:09 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128"}]: dispatch 2026-03-10T13:51:09.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:51:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:09 vm09 ceph-mon[53367]: pgmap v960: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 775 B/s rd, 258 B/s wr, 1 op/s 2026-03-10T13:51:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:09 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:51:09.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:09 vm09 ceph-mon[53367]: osdmap e626: 8 total, 8 up, 8 in 2026-03-10T13:51:09.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:09 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128"}]: dispatch 2026-03-10T13:51:09.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:09 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128"}]: dispatch 2026-03-10T13:51:09.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:51:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:51:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:51:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:51:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:10 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:51:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:10 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128"}]': finished 2026-03-10T13:51:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:10 vm05 ceph-mon[58955]: osdmap e627: 8 total, 8 up, 8 in 2026-03-10T13:51:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:10 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:51:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:10 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128"}]': finished 2026-03-10T13:51:10.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:10 vm05 ceph-mon[51512]: osdmap e627: 8 total, 8 up, 8 in 2026-03-10T13:51:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:10 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:51:10.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:10 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-128"}]': finished 2026-03-10T13:51:10.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:10 vm09 ceph-mon[53367]: osdmap e627: 8 total, 8 up, 8 in 2026-03-10T13:51:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:11 vm05 ceph-mon[58955]: pgmap v963: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 264 B/s wr, 1 op/s 2026-03-10T13:51:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:11 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:51:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:11 vm05 ceph-mon[58955]: osdmap e628: 8 total, 8 up, 8 in 2026-03-10T13:51:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:11 vm05 ceph-mon[51512]: pgmap v963: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 264 B/s wr, 1 op/s 2026-03-10T13:51:11.832 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:11 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:51:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:11 vm05 ceph-mon[51512]: osdmap e628: 8 total, 8 up, 8 in 2026-03-10T13:51:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:11 vm09 ceph-mon[53367]: pgmap v963: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 264 B/s wr, 1 op/s 2026-03-10T13:51:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:11 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:51:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:11 vm09 ceph-mon[53367]: osdmap e628: 8 total, 8 up, 8 in 2026-03-10T13:51:12.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:12 vm05 ceph-mon[58955]: osdmap e629: 8 total, 8 up, 8 in 2026-03-10T13:51:12.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:12 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:12.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:12 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:12.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:12 vm05 ceph-mon[51512]: osdmap e629: 8 total, 8 up, 8 in 2026-03-10T13:51:12.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:12 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:12.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:12 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:12.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:12 vm09 ceph-mon[53367]: osdmap e629: 8 total, 8 up, 8 in 2026-03-10T13:51:12.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:12 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:12.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:12 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-130","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:13 vm05 ceph-mon[58955]: pgmap v966: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:51:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:13 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:51:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:13 vm05 ceph-mon[58955]: osdmap e630: 8 total, 8 up, 8 in 2026-03-10T13:51:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:13 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:13 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:13 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:51:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:13 vm05 ceph-mon[58955]: osdmap e631: 8 total, 8 up, 8 in 2026-03-10T13:51:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:13 vm05 ceph-mon[51512]: pgmap v966: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:51:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:13 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:51:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:13 vm05 ceph-mon[51512]: osdmap e630: 8 total, 8 up, 8 in 2026-03-10T13:51:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:13 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:13 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:13 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:51:13.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:13 vm05 ceph-mon[51512]: osdmap e631: 8 total, 8 up, 8 in 2026-03-10T13:51:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:13 vm09 ceph-mon[53367]: pgmap v966: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:51:13.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:13 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-130","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:51:13.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:13 vm09 ceph-mon[53367]: osdmap e630: 8 total, 8 up, 8 in 2026-03-10T13:51:13.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:13 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:13.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:13 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:13.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:13 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:51:13.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:13 vm09 ceph-mon[53367]: osdmap e631: 8 total, 8 up, 8 in 2026-03-10T13:51:14.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:14 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-130"}]: dispatch 2026-03-10T13:51:14.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:14 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-130"}]: dispatch 2026-03-10T13:51:14.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:14 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-130"}]': finished 2026-03-10T13:51:14.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:14 vm05 ceph-mon[58955]: osdmap e632: 8 total, 8 up, 8 in 2026-03-10T13:51:14.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-130"}]: dispatch 2026-03-10T13:51:14.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:14 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-130"}]: dispatch 2026-03-10T13:51:14.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:14 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-130"}]': finished 2026-03-10T13:51:14.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:14 vm05 ceph-mon[51512]: osdmap e632: 8 total, 8 up, 8 in 2026-03-10T13:51:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:14 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-130"}]: dispatch 2026-03-10T13:51:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:14 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-130"}]: dispatch 2026-03-10T13:51:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:14 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-130"}]': finished 2026-03-10T13:51:14.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:14 vm09 ceph-mon[53367]: osdmap e632: 8 total, 8 up, 8 in 2026-03-10T13:51:15.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:15 vm09 ceph-mon[53367]: pgmap v969: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:51:15.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:15 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-130", "mode": "writeback"}]: dispatch 2026-03-10T13:51:15.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:15 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-130", "mode": "writeback"}]: dispatch 2026-03-10T13:51:15.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:15 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:51:15.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:15 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-130", "mode": "writeback"}]': finished 2026-03-10T13:51:15.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:15 vm09 ceph-mon[53367]: osdmap e633: 8 total, 8 up, 8 in 2026-03-10T13:51:15.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:15 vm05 ceph-mon[58955]: pgmap v969: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:51:15.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:15 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-130", "mode": "writeback"}]: dispatch 2026-03-10T13:51:15.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:15 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-130", "mode": "writeback"}]: dispatch 2026-03-10T13:51:15.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:15 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:51:15.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:15 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-130", "mode": "writeback"}]': finished 2026-03-10T13:51:15.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:15 vm05 ceph-mon[58955]: osdmap e633: 8 total, 8 up, 8 in 2026-03-10T13:51:15.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:15 vm05 ceph-mon[51512]: pgmap v969: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:51:15.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:15 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-130", "mode": "writeback"}]: dispatch 2026-03-10T13:51:15.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:15 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-130", "mode": "writeback"}]: dispatch 2026-03-10T13:51:15.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:15 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:51:15.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:15 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-130", "mode": "writeback"}]': finished 2026-03-10T13:51:15.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:15 vm05 ceph-mon[51512]: osdmap e633: 8 total, 8 up, 8 in 2026-03-10T13:51:16.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:16.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:16 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:16.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:16 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:51:16.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:16.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:16 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:16.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:16 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:51:16.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:16 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:16.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:16 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:16.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:16 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:51:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:17 vm09 ceph-mon[53367]: pgmap v972: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:51:17.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:17 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:51:17.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:17 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130"}]: dispatch 2026-03-10T13:51:17.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:17 vm09 ceph-mon[53367]: osdmap e634: 8 total, 8 up, 8 in 2026-03-10T13:51:17.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:17 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130"}]: dispatch 2026-03-10T13:51:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:17 vm05 ceph-mon[58955]: pgmap v972: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:51:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:17 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:51:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:17 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130"}]: dispatch 2026-03-10T13:51:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:17 vm05 ceph-mon[58955]: osdmap e634: 8 total, 8 up, 8 in 2026-03-10T13:51:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:17 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130"}]: dispatch 2026-03-10T13:51:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:17 vm05 ceph-mon[51512]: pgmap v972: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:51:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:17 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:51:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:17 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130"}]: dispatch 2026-03-10T13:51:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:17 vm05 ceph-mon[51512]: osdmap e634: 8 total, 8 up, 8 in 2026-03-10T13:51:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:17 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130"}]: dispatch 2026-03-10T13:51:18.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:18 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:51:18.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:18 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130"}]': finished 2026-03-10T13:51:18.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:18 vm09 ceph-mon[53367]: osdmap e635: 8 total, 8 up, 8 in 2026-03-10T13:51:18.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:51:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:51:19.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:18 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:51:19.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:18 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130"}]': finished 2026-03-10T13:51:19.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:18 vm05 ceph-mon[58955]: osdmap e635: 8 total, 8 up, 8 in 2026-03-10T13:51:19.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:18 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:51:19.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:18 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-130"}]': finished 2026-03-10T13:51:19.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:18 vm05 ceph-mon[51512]: osdmap e635: 8 total, 8 up, 8 in 2026-03-10T13:51:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:19 vm09 ceph-mon[53367]: pgmap v975: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:51:19.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:19 vm09 ceph-mon[53367]: osdmap e636: 8 total, 8 up, 8 in 2026-03-10T13:51:19.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:51:19.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:19 vm05 ceph-mon[58955]: pgmap v975: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:51:19.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:19 vm05 ceph-mon[58955]: osdmap e636: 8 total, 8 up, 8 in 2026-03-10T13:51:19.979 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:51:19.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:19 vm05 ceph-mon[51512]: pgmap v975: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:51:19.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:19 vm05 ceph-mon[51512]: osdmap e636: 8 total, 8 up, 8 in 2026-03-10T13:51:19.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:51:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:51:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:51:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:51:20.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:20 vm09 ceph-mon[53367]: osdmap e637: 8 total, 8 up, 8 in 2026-03-10T13:51:20.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:20.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:20 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:20 vm05 ceph-mon[58955]: osdmap e637: 8 total, 8 up, 8 in 2026-03-10T13:51:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:20 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:21.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:20 vm05 ceph-mon[51512]: osdmap e637: 8 total, 8 up, 8 in 2026-03-10T13:51:21.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:20 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:21.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:20 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:21 vm09 ceph-mon[53367]: pgmap v978: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:51:21.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:21 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:51:21.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:21 vm09 ceph-mon[53367]: osdmap e638: 8 total, 8 up, 8 in 2026-03-10T13:51:21.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:21.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:21 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:21.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:21 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:51:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:21 vm05 ceph-mon[58955]: pgmap v978: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:51:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:21 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:51:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:21 vm05 ceph-mon[58955]: osdmap e638: 8 total, 8 up, 8 in 2026-03-10T13:51:22.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:21 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:22.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:21 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:22.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:21 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:51:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:21 vm05 ceph-mon[51512]: pgmap v978: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:51:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:21 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:51:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:21 vm05 ceph-mon[51512]: osdmap e638: 8 total, 8 up, 8 in 2026-03-10T13:51:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:21 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:21 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:21 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:51:23.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:22 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:51:23.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:22 vm05 ceph-mon[58955]: osdmap e639: 8 total, 8 up, 8 in 2026-03-10T13:51:23.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:22 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:23.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:22 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:23.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:22 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:51:23.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:22 vm05 ceph-mon[51512]: osdmap e639: 8 total, 8 up, 8 in 2026-03-10T13:51:23.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:23.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:22 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:22 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:51:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:22 vm09 ceph-mon[53367]: osdmap e639: 8 total, 8 up, 8 in 2026-03-10T13:51:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:23.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:22 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:23 vm05 ceph-mon[58955]: pgmap v981: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:51:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:23 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]': finished 2026-03-10T13:51:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:23 vm05 ceph-mon[58955]: osdmap e640: 8 total, 8 up, 8 in 2026-03-10T13:51:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:23 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-132", "mode": "writeback"}]: dispatch 2026-03-10T13:51:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:23 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-132", "mode": "writeback"}]: dispatch 2026-03-10T13:51:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:23 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:51:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:24.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:23 vm05 ceph-mon[51512]: pgmap v981: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:51:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:23 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]': finished 2026-03-10T13:51:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:23 vm05 ceph-mon[51512]: osdmap e640: 8 total, 8 up, 8 in 2026-03-10T13:51:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:23 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-132", "mode": "writeback"}]: dispatch 2026-03-10T13:51:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:23 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-132", "mode": "writeback"}]: dispatch 2026-03-10T13:51:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:23 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:51:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:23 vm09 ceph-mon[53367]: pgmap v981: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:51:24.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:23 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]': finished 2026-03-10T13:51:24.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:23 vm09 ceph-mon[53367]: osdmap e640: 8 total, 8 up, 8 in 2026-03-10T13:51:24.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:23 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-132", "mode": "writeback"}]: dispatch 2026-03-10T13:51:24.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:23 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-132", "mode": "writeback"}]: dispatch 2026-03-10T13:51:24.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:23 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:51:24.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:25.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:24 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:51:25.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:24 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-132", "mode": "writeback"}]': finished 2026-03-10T13:51:25.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:24 vm05 ceph-mon[58955]: osdmap e641: 8 total, 8 up, 8 in 2026-03-10T13:51:25.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:24 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:51:25.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:24 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-132", "mode": "writeback"}]': finished 2026-03-10T13:51:25.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:24 vm05 ceph-mon[51512]: osdmap e641: 8 total, 8 up, 8 in 2026-03-10T13:51:25.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:24 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:51:25.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:24 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-132", "mode": "writeback"}]': finished 2026-03-10T13:51:25.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:24 vm09 ceph-mon[53367]: osdmap e641: 8 total, 8 up, 8 in 2026-03-10T13:51:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:25 vm05 ceph-mon[58955]: pgmap v984: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:51:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:25 vm05 ceph-mon[58955]: osdmap e642: 8 total, 8 up, 8 in 2026-03-10T13:51:26.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:25 vm05 ceph-mon[51512]: pgmap v984: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:51:26.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:25 vm05 ceph-mon[51512]: osdmap e642: 8 total, 8 up, 8 in 2026-03-10T13:51:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:25 vm09 ceph-mon[53367]: pgmap v984: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:51:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:25 vm09 ceph-mon[53367]: osdmap e642: 8 total, 8 up, 8 in 
2026-03-10T13:51:26.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:26 vm09 ceph-mon[53367]: osdmap e643: 8 total, 8 up, 8 in 2026-03-10T13:51:26.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:26.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:26 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:26.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:26 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:51:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:26 vm05 ceph-mon[58955]: osdmap e643: 8 total, 8 up, 8 in 2026-03-10T13:51:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:26 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:26 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:51:27.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:26 vm05 ceph-mon[51512]: osdmap e643: 8 total, 8 up, 8 in 2026-03-10T13:51:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:26 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:26 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:26 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:51:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:27 vm05 ceph-mon[58955]: pgmap v987: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:51:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:27 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:51:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:27 vm05 ceph-mon[58955]: osdmap e644: 8 total, 8 up, 8 in 2026-03-10T13:51:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:27 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:27 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:27 vm05 ceph-mon[51512]: pgmap v987: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:51:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:27 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:51:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:27 vm05 ceph-mon[51512]: osdmap e644: 8 total, 8 up, 8 in 2026-03-10T13:51:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:27 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:27 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:27 vm09 ceph-mon[53367]: pgmap v987: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:51:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:27 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:51:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:27 vm09 ceph-mon[53367]: osdmap e644: 8 total, 8 up, 8 in 2026-03-10T13:51:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:27 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:27 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:29.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:28 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]': finished 2026-03-10T13:51:29.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:28 vm05 ceph-mon[58955]: osdmap e645: 8 total, 8 up, 8 in 2026-03-10T13:51:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:28 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]': finished 2026-03-10T13:51:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:28 vm05 ceph-mon[51512]: osdmap e645: 8 total, 8 up, 8 in 2026-03-10T13:51:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:28 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-132"}]': finished 2026-03-10T13:51:29.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:28 vm09 ceph-mon[53367]: osdmap e645: 8 total, 8 up, 8 in 2026-03-10T13:51:29.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:51:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:51:30.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:29 vm05 ceph-mon[58955]: pgmap v990: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:51:30.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:29 vm05 ceph-mon[58955]: osdmap e646: 8 total, 8 up, 8 in 2026-03-10T13:51:30.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:30.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:29 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:30.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:51:30.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:29 vm05 ceph-mon[51512]: pgmap v990: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:51:30.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:29 vm05 ceph-mon[51512]: osdmap e646: 8 total, 8 up, 8 in 2026-03-10T13:51:30.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:29 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:30.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:29 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:30.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:51:30.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:51:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:51:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:51:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:29 vm09 ceph-mon[53367]: pgmap v990: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:51:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:29 vm09 ceph-mon[53367]: osdmap e646: 8 total, 8 up, 8 in 2026-03-10T13:51:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:29 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:51:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:30 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:51:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:30 vm05 ceph-mon[58955]: osdmap e647: 8 total, 8 up, 8 in 2026-03-10T13:51:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:30 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:31.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:30 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:51:31.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:30 vm05 ceph-mon[51512]: osdmap e647: 8 total, 8 up, 8 in 2026-03-10T13:51:31.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:30 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:31.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:30 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:31.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:30 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:51:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:30 vm09 ceph-mon[53367]: osdmap e647: 8 total, 8 up, 8 in 2026-03-10T13:51:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:30 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132"}]: dispatch 2026-03-10T13:51:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:31 vm05 ceph-mon[58955]: pgmap v993: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-10T13:51:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:31 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:51:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:31 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132"}]': finished 2026-03-10T13:51:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:31 vm05 ceph-mon[58955]: osdmap e648: 8 total, 8 up, 8 in 2026-03-10T13:51:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:31 vm05 ceph-mon[51512]: pgmap v993: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-10T13:51:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:31 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:51:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:31 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132"}]': finished 2026-03-10T13:51:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:31 vm05 ceph-mon[51512]: osdmap e648: 8 total, 8 up, 8 in 2026-03-10T13:51:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:31 vm09 ceph-mon[53367]: pgmap v993: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s 2026-03-10T13:51:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:31 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:51:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 
10 13:51:31 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-132"}]': finished 2026-03-10T13:51:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:31 vm09 ceph-mon[53367]: osdmap e648: 8 total, 8 up, 8 in 2026-03-10T13:51:33.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:32 vm05 ceph-mon[58955]: osdmap e649: 8 total, 8 up, 8 in 2026-03-10T13:51:33.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:32 vm05 ceph-mon[51512]: osdmap e649: 8 total, 8 up, 8 in 2026-03-10T13:51:33.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:32 vm09 ceph-mon[53367]: osdmap e649: 8 total, 8 up, 8 in 2026-03-10T13:51:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:33 vm05 ceph-mon[58955]: pgmap v996: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:51:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:33 vm05 ceph-mon[58955]: osdmap e650: 8 total, 8 up, 8 in 2026-03-10T13:51:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:33 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:33 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:33 vm05 ceph-mon[51512]: pgmap v996: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:51:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:33 vm05 ceph-mon[51512]: osdmap e650: 8 total, 8 up, 8 in 2026-03-10T13:51:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:33 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:33 vm09 ceph-mon[53367]: pgmap v996: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T13:51:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:33 vm09 ceph-mon[53367]: osdmap e650: 8 total, 8 up, 8 in 2026-03-10T13:51:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:33 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:34.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:33 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-134","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:35.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:34 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:51:35.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:35.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:34 vm09 ceph-mon[53367]: osdmap e651: 8 total, 8 up, 8 in 2026-03-10T13:51:35.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:34 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:35.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:34 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:51:35.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:34 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:35.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:34 vm05 ceph-mon[58955]: osdmap e651: 8 total, 8 up, 8 in 2026-03-10T13:51:35.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:34 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:35.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:34 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-134","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:51:35.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:34 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:35.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:34 vm05 ceph-mon[51512]: osdmap e651: 8 total, 8 up, 8 in 2026-03-10T13:51:35.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:34 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:36.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:36 vm05 ceph-mon[58955]: pgmap v999: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-10T13:51:36.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:36 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:51:36.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:36 vm05 ceph-mon[58955]: osdmap e652: 8 total, 8 up, 8 in 2026-03-10T13:51:36.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:36 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-134"}]: dispatch 2026-03-10T13:51:36.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:36 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-134"}]: dispatch 2026-03-10T13:51:36.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:36 vm05 ceph-mon[51512]: pgmap v999: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-10T13:51:36.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:36 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:51:36.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:36 vm05 ceph-mon[51512]: osdmap e652: 8 total, 8 up, 8 in 2026-03-10T13:51:36.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:36 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-134"}]: dispatch 2026-03-10T13:51:36.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:36 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-134"}]: dispatch 2026-03-10T13:51:36.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:36 vm09 ceph-mon[53367]: pgmap v999: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 0 op/s 2026-03-10T13:51:36.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:36 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:51:36.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:36 vm09 ceph-mon[53367]: osdmap e652: 8 total, 8 up, 8 in 2026-03-10T13:51:36.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-134"}]: dispatch 2026-03-10T13:51:36.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:36 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-134"}]: dispatch 2026-03-10T13:51:37.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-134"}]': finished 2026-03-10T13:51:37.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-134", "mode": "writeback"}]: dispatch 2026-03-10T13:51:37.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[58955]: osdmap e653: 8 total, 8 up, 8 in 2026-03-10T13:51:37.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-134", "mode": "writeback"}]: dispatch 2026-03-10T13:51:37.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:51:37.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-134", "mode": "writeback"}]': finished 2026-03-10T13:51:37.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[58955]: osdmap e654: 8 total, 8 up, 8 in 2026-03-10T13:51:37.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:37.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:37.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-134"}]': finished 2026-03-10T13:51:37.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-134", "mode": "writeback"}]: dispatch 2026-03-10T13:51:37.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[51512]: osdmap e653: 8 total, 8 up, 8 in 2026-03-10T13:51:37.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-134", "mode": "writeback"}]: dispatch 2026-03-10T13:51:37.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:51:37.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-134", "mode": "writeback"}]': finished 2026-03-10T13:51:37.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[51512]: osdmap e654: 8 total, 8 up, 8 in 2026-03-10T13:51:37.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:37.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:37 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:37 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-134"}]': finished 2026-03-10T13:51:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:37 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-134", "mode": "writeback"}]: dispatch 2026-03-10T13:51:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:37 vm09 ceph-mon[53367]: osdmap e653: 8 total, 8 up, 8 in 2026-03-10T13:51:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:37 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-134", "mode": "writeback"}]: dispatch 2026-03-10T13:51:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:37 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:51:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:37 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-134", "mode": "writeback"}]': finished 2026-03-10T13:51:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:37 vm09 ceph-mon[53367]: osdmap e654: 8 total, 8 up, 8 in 2026-03-10T13:51:37.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:37 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:37.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:37 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:38 vm05 ceph-mon[58955]: pgmap v1002: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:51:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:38 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:51:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:38 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134"}]: dispatch 2026-03-10T13:51:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:38 vm05 ceph-mon[58955]: osdmap e655: 8 total, 8 up, 8 in 2026-03-10T13:51:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:38 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134"}]: dispatch 2026-03-10T13:51:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:38 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:51:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:38.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:38 vm05 ceph-mon[51512]: pgmap v1002: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:51:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:38 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:51:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:38 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134"}]: dispatch 2026-03-10T13:51:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:38 vm05 ceph-mon[51512]: osdmap e655: 8 total, 8 up, 8 in 2026-03-10T13:51:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:38 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134"}]: dispatch 2026-03-10T13:51:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:38 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:51:38.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:38 vm09 ceph-mon[53367]: pgmap v1002: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:51:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:38 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:51:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:38 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134"}]: dispatch 2026-03-10T13:51:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:38 vm09 ceph-mon[53367]: osdmap e655: 8 total, 8 up, 8 in 2026-03-10T13:51:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:38 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134"}]: dispatch 2026-03-10T13:51:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:38 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:51:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:39.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:39 vm09 ceph-mon[53367]: pgmap v1005: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:51:39.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:39 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:51:39.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:39 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134"}]': finished 2026-03-10T13:51:39.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:39 vm09 ceph-mon[53367]: osdmap e656: 8 total, 8 up, 8 in 2026-03-10T13:51:39.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:51:39.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:51:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:51:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:39 vm05 ceph-mon[58955]: pgmap v1005: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:51:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:39 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:51:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:39 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134"}]': finished 2026-03-10T13:51:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:39 vm05 ceph-mon[58955]: osdmap e656: 8 total, 8 up, 8 in 2026-03-10T13:51:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:51:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:39 vm05 ceph-mon[51512]: pgmap v1005: 268 pgs: 32 creating+peering, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:51:39.331 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:39 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:51:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:39 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-134"}]': finished 2026-03-10T13:51:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:39 vm05 ceph-mon[51512]: osdmap e656: 8 total, 8 up, 8 in 2026-03-10T13:51:39.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:51:40.269 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:51:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:51:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:51:40.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:40 vm05 ceph-mon[58955]: osdmap e657: 8 total, 8 up, 8 in 2026-03-10T13:51:40.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:40 vm05 ceph-mon[51512]: osdmap e657: 8 total, 8 up, 8 in 2026-03-10T13:51:40.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:40 vm09 ceph-mon[53367]: osdmap e657: 8 total, 8 up, 8 in 2026-03-10T13:51:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:41 vm05 ceph-mon[58955]: pgmap v1008: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T13:51:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:41 vm05 ceph-mon[58955]: osdmap e658: 8 total, 8 up, 8 in 2026-03-10T13:51:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:41 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:41 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:41.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:41 vm05 ceph-mon[51512]: pgmap v1008: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T13:51:41.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:41 vm05 ceph-mon[51512]: osdmap e658: 8 total, 8 up, 8 in 2026-03-10T13:51:41.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:41 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:41.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:41 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:41.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:41 vm09 ceph-mon[53367]: pgmap v1008: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T13:51:41.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:41 vm09 ceph-mon[53367]: osdmap e658: 8 total, 8 up, 8 in 2026-03-10T13:51:41.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:41 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:41.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:41 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:51:42.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:42 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:51:42.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:42 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:42.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:42 vm09 ceph-mon[53367]: osdmap e659: 8 total, 8 up, 8 in 2026-03-10T13:51:42.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:42 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:42.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:42 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:51:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:42 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:42 vm05 ceph-mon[58955]: osdmap e659: 8 total, 8 up, 8 in 2026-03-10T13:51:42.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:42 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:42 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:51:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:42 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:42 vm05 ceph-mon[51512]: osdmap e659: 8 total, 8 up, 8 in 2026-03-10T13:51:42.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:42 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:51:43.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:43 vm09 ceph-mon[53367]: pgmap v1011: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T13:51:43.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:43 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:51:43.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:43 vm09 ceph-mon[53367]: osdmap e660: 8 total, 8 up, 8 in 2026-03-10T13:51:43.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:43 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-136"}]: dispatch 2026-03-10T13:51:43.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:43 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-136"}]: dispatch 2026-03-10T13:51:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:43 vm05 ceph-mon[58955]: pgmap v1011: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T13:51:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:43 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:51:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:43 vm05 ceph-mon[58955]: osdmap e660: 8 total, 8 up, 8 in 2026-03-10T13:51:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:43 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-136"}]: dispatch 2026-03-10T13:51:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:43 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-136"}]: dispatch 2026-03-10T13:51:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:43 vm05 ceph-mon[51512]: pgmap v1011: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T13:51:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:43 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:51:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:43 vm05 ceph-mon[51512]: osdmap e660: 8 total, 8 up, 8 in 2026-03-10T13:51:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:43 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-136"}]: dispatch 2026-03-10T13:51:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:43 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-136"}]: dispatch 2026-03-10T13:51:44.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:44 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-136"}]': finished 2026-03-10T13:51:44.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:44 vm09 ceph-mon[53367]: osdmap e661: 8 total, 8 up, 8 in 2026-03-10T13:51:44.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:44 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-136", "mode": "writeback"}]: dispatch 2026-03-10T13:51:44.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:44 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-136", "mode": "writeback"}]: dispatch 2026-03-10T13:51:44.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:44 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:51:44.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:44 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:51:44.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:44 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:51:44.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:44 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:51:44.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:44 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:51:44.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:44 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-136", "mode": "writeback"}]': finished 2026-03-10T13:51:44.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:44 vm09 ceph-mon[53367]: osdmap e662: 8 total, 8 up, 8 in 2026-03-10T13:51:44.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-136"}]': finished 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[51512]: osdmap e661: 8 total, 8 up, 8 in 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-136", "mode": "writeback"}]: dispatch 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-136", "mode": "writeback"}]: dispatch 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-136", "mode": "writeback"}]': finished 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[51512]: osdmap e662: 8 total, 8 up, 8 in 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-136"}]': finished 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[58955]: osdmap e661: 8 total, 8 up, 8 in 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-136", "mode": "writeback"}]: dispatch 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-136", "mode": "writeback"}]: dispatch 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-136", "mode": "writeback"}]': finished 2026-03-10T13:51:44.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:44 vm05 ceph-mon[58955]: osdmap e662: 8 total, 8 up, 8 in 2026-03-10T13:51:45.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:45 vm09 ceph-mon[53367]: pgmap v1014: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:51:45.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:45 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:45.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:45 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:51:45.733 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:45 vm05 ceph-mon[51512]: pgmap v1014: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:51:45.733 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:45 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:51:45.733 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:45 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:51:45.733 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:45 vm05 ceph-mon[58955]: pgmap v1014: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:51:45.733 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:45 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:51:45.733 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:45 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:51:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:46 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished
2026-03-10T13:51:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136"}]: dispatch
2026-03-10T13:51:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:46 vm05 ceph-mon[51512]: osdmap e663: 8 total, 8 up, 8 in
2026-03-10T13:51:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:46 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136"}]: dispatch
2026-03-10T13:51:46.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:46 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished
2026-03-10T13:51:46.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136"}]: dispatch
2026-03-10T13:51:46.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:46 vm05 ceph-mon[58955]: osdmap e663: 8 total, 8 up, 8 in
2026-03-10T13:51:46.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:46 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136"}]: dispatch
2026-03-10T13:51:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:46 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished
2026-03-10T13:51:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136"}]: dispatch
2026-03-10T13:51:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:46 vm09 ceph-mon[53367]: osdmap e663: 8 total, 8 up, 8 in
2026-03-10T13:51:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:46 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136"}]: dispatch
2026-03-10T13:51:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:47 vm05 ceph-mon[58955]: pgmap v1017: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:51:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:47 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T13:51:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:47 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136"}]': finished
2026-03-10T13:51:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:47 vm05 ceph-mon[58955]: osdmap e664: 8 total, 8 up, 8 in
2026-03-10T13:51:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:47 vm05 ceph-mon[51512]: pgmap v1017: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:51:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:47 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T13:51:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:47 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136"}]': finished
2026-03-10T13:51:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:47 vm05 ceph-mon[51512]: osdmap e664: 8 total, 8 up, 8 in
2026-03-10T13:51:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:47 vm09 ceph-mon[53367]: pgmap v1017: 268 pgs: 268 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:51:47.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:47 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T13:51:47.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:47 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-136"}]': finished
2026-03-10T13:51:47.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:47 vm09 ceph-mon[53367]: osdmap e664: 8 total, 8 up, 8 in
2026-03-10T13:51:48.815 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:48 vm09 ceph-mon[53367]: osdmap e665: 8 total, 8 up, 8 in
2026-03-10T13:51:48.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:48 vm05 ceph-mon[51512]: osdmap e665: 8 total, 8 up, 8 in
2026-03-10T13:51:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:48 vm05 ceph-mon[58955]: osdmap e665: 8 total, 8 up, 8 in
2026-03-10T13:51:49.173 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:51:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T13:51:49.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:49 vm05 ceph-mon[51512]: pgmap v1020: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T13:51:49.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:49 vm05 ceph-mon[51512]: osdmap e666: 8 total, 8 up, 8 in
2026-03-10T13:51:49.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:49 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-138","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:51:49.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:49 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-138","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:51:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:51:49.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:49 vm05 ceph-mon[58955]: pgmap v1020: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T13:51:49.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:49 vm05 ceph-mon[58955]: osdmap e666: 8 total, 8 up, 8 in
2026-03-10T13:51:49.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:49 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-138","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:51:49.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:49 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-138","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:51:49.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:51:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:49 vm09 ceph-mon[53367]: pgmap v1020: 236 pgs: 236 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T13:51:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:49 vm09 ceph-mon[53367]: osdmap e666: 8 total, 8 up, 8 in
2026-03-10T13:51:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:49 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-138","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:51:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:49 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-138","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:51:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:51:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:51:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:51:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T13:51:50.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:50 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-138","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:51:50.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:50 vm05 ceph-mon[51512]: osdmap e667: 8 total, 8 up, 8 in
2026-03-10T13:51:50.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:51:50.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:50 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:51:50.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:50 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-138","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:51:50.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:50 vm05 ceph-mon[58955]: osdmap e667: 8 total, 8 up, 8 in
2026-03-10T13:51:50.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:51:50.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:50 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:51:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:50 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-138","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:51:50.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:50 vm09 ceph-mon[53367]: osdmap e667: 8 total, 8 up, 8 in
2026-03-10T13:51:50.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:51:50.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:50 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:51:51.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:51 vm05 ceph-mon[51512]: pgmap v1023: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s
2026-03-10T13:51:51.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:51 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T13:51:51.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:51 vm05 ceph-mon[51512]: osdmap e668: 8 total, 8 up, 8 in
2026-03-10T13:51:51.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:51 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T13:51:51.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:51 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T13:51:51.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:51 vm05 ceph-mon[58955]: pgmap v1023: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s
2026-03-10T13:51:51.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:51 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T13:51:51.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:51 vm05 ceph-mon[58955]: osdmap e668: 8 total, 8 up, 8 in
2026-03-10T13:51:51.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T13:51:51.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:51 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T13:51:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:51 vm09 ceph-mon[53367]: pgmap v1023: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s
2026-03-10T13:51:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:51 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T13:51:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:51 vm09 ceph-mon[53367]: osdmap e668: 8 total, 8 up, 8 in
2026-03-10T13:51:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:51 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T13:51:51.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:51 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T13:51:52.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:52 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_count","val": "2"}]': finished
2026-03-10T13:51:52.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:52 vm05 ceph-mon[51512]: osdmap e669: 8 total, 8 up, 8 in
2026-03-10T13:51:52.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:52 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T13:51:52.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:52 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T13:51:52.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:52 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_count","val": "2"}]': finished
2026-03-10T13:51:52.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:52 vm05 ceph-mon[58955]: osdmap e669: 8 total, 8 up, 8 in
2026-03-10T13:51:52.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T13:51:52.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:52 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T13:51:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:52 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_count","val": "2"}]': finished
2026-03-10T13:51:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:52 vm09 ceph-mon[53367]: osdmap e669: 8 total, 8 up, 8 in
2026-03-10T13:51:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T13:51:52.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:52 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T13:51:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:53 vm05 ceph-mon[58955]: pgmap v1026: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s
2026-03-10T13:51:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:53 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_period","val": "600"}]': finished
2026-03-10T13:51:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:53 vm05 ceph-mon[58955]: osdmap e670: 8 total, 8 up, 8 in
2026-03-10T13:51:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:53 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T13:51:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:53 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T13:51:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:53 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:51:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:51:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:53 vm05 ceph-mon[51512]: pgmap v1026: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s
2026-03-10T13:51:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:53 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_period","val": "600"}]': finished
2026-03-10T13:51:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:53 vm05 ceph-mon[51512]: osdmap e670: 8 total, 8 up, 8 in
2026-03-10T13:51:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T13:51:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:53 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T13:51:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:53 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:51:53.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:51:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:53 vm09 ceph-mon[53367]: pgmap v1026: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s
2026-03-10T13:51:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:53 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_period","val": "600"}]': finished
2026-03-10T13:51:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:53 vm09 ceph-mon[53367]: osdmap e670: 8 total, 8 up, 8 in
2026-03-10T13:51:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:53 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T13:51:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:53 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T13:51:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:53 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T13:51:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:51:54.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:54 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_type","val": "explicit_object"}]': finished
2026-03-10T13:51:54.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:54 vm05 ceph-mon[51512]: osdmap e671: 8 total, 8 up, 8 in
2026-03-10T13:51:54.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:54 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_type","val": "explicit_object"}]': finished
2026-03-10T13:51:54.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:54 vm05 ceph-mon[58955]: osdmap e671: 8 total, 8 up, 8 in
2026-03-10T13:51:54.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:54 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-138","var": "hit_set_type","val": "explicit_object"}]': finished
2026-03-10T13:51:54.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:54 vm09 ceph-mon[53367]: osdmap e671: 8 total, 8 up, 8 in
2026-03-10T13:51:55.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:55 vm05 ceph-mon[51512]: pgmap v1029: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:51:55.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:51:55.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:55 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:51:55.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:55 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138"}]: dispatch
2026-03-10T13:51:55.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:55 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138"}]: dispatch
2026-03-10T13:51:55.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:55 vm05 ceph-mon[58955]: pgmap v1029: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:51:55.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:51:55.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:55 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:51:55.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:55 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138"}]: dispatch
2026-03-10T13:51:55.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:55 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138"}]: dispatch
2026-03-10T13:51:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:55 vm09 ceph-mon[53367]: pgmap v1029: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:51:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:51:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:55 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:51:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:55 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138"}]: dispatch
2026-03-10T13:51:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:55 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138"}]: dispatch
2026-03-10T13:51:56.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:56 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138"}]': finished
2026-03-10T13:51:56.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:56 vm05 ceph-mon[51512]: osdmap e672: 8 total, 8 up, 8 in
2026-03-10T13:51:56.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:56 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138"}]': finished
2026-03-10T13:51:56.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:56 vm05 ceph-mon[58955]: osdmap e672: 8 total, 8 up, 8 in
2026-03-10T13:51:56.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:56 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-138"}]': finished
2026-03-10T13:51:56.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:56 vm09 ceph-mon[53367]: osdmap e672: 8 total, 8 up, 8 in
2026-03-10T13:51:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:57 vm05 ceph-mon[51512]: pgmap v1031: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T13:51:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:57 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:51:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:57 vm05 ceph-mon[51512]: osdmap e673: 8 total, 8 up, 8 in
2026-03-10T13:51:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:57 vm05 ceph-mon[58955]: pgmap v1031: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T13:51:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:57 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:51:57.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:57 vm05 ceph-mon[58955]: osdmap e673: 8 total, 8 up, 8 in
2026-03-10T13:51:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:57 vm09 ceph-mon[53367]: pgmap v1031: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T13:51:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:57 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:51:57.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:57 vm09 ceph-mon[53367]: osdmap e673: 8 total, 8 up, 8 in
2026-03-10T13:51:58.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:58 vm09 ceph-mon[53367]: osdmap e674: 8 total, 8 up, 8 in
2026-03-10T13:51:58.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:58 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-140","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:51:58.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:58 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-140","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:51:58.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:51:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T13:51:59.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:58 vm05 ceph-mon[58955]: osdmap e674: 8 total, 8 up, 8 in
2026-03-10T13:51:59.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:58 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-140","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:51:59.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:58 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-140","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:51:59.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:58 vm05 ceph-mon[51512]: osdmap e674: 8 total, 8 up, 8 in
2026-03-10T13:51:59.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:58 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-140","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:51:59.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:58 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-140","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:51:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:59 vm09 ceph-mon[53367]: pgmap v1034: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T13:51:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:59 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-140","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:51:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:51:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:59 vm09 ceph-mon[53367]: osdmap e675: 8 total, 8 up, 8 in
2026-03-10T13:51:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:59 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:51:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:51:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:59 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T13:51:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:59 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T13:51:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:59 vm09 ceph-mon[53367]: osdmap e676: 8 total, 8 up, 8 in
2026-03-10T13:51:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:51:59 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T13:51:59.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[51512]: pgmap v1034: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T13:51:59.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-140","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:51:59.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:51:59.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[51512]: osdmap e675: 8 total, 8 up, 8 in
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[51512]: osdmap e676: 8 total, 8 up, 8 in
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[58955]: pgmap v1034: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-140","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[58955]: osdmap e675: 8 total, 8 up, 8 in
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[58955]: osdmap e676: 8 total, 8 up, 8 in
2026-03-10T13:51:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:51:59 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T13:52:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:51:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:51:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T13:52:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:01 vm09 ceph-mon[53367]: pgmap v1037: 268 pgs: 16 creating+peering, 252 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:52:01.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:01 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_count","val": "3"}]': finished
2026-03-10T13:52:01.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:01 vm09 ceph-mon[53367]: osdmap e677: 8 total, 8 up, 8 in
2026-03-10T13:52:01.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:01 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_period","val": "3"}]: dispatch
2026-03-10T13:52:01.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:01 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_period","val": "3"}]: dispatch
2026-03-10T13:52:02.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:01 vm05 ceph-mon[51512]: pgmap v1037: 268 pgs: 16 creating+peering, 252 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:52:02.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:01 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_count","val": "3"}]': finished
2026-03-10T13:52:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:01 vm05 ceph-mon[51512]: osdmap e677: 8 total, 8 up, 8 in
2026-03-10T13:52:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:01 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_period","val": "3"}]: dispatch
2026-03-10T13:52:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:01 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_period","val": "3"}]: dispatch
2026-03-10T13:52:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:01 vm05 ceph-mon[58955]: pgmap v1037: 268 pgs: 16 creating+peering, 252 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:52:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:01 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_count","val": "3"}]': finished
2026-03-10T13:52:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:01 vm05 ceph-mon[58955]: osdmap e677: 8 total, 8 up, 8 in
2026-03-10T13:52:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:01 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_period","val": "3"}]: dispatch
2026-03-10T13:52:02.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:01 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_period","val": "3"}]: dispatch
2026-03-10T13:52:02.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:02 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_period","val": "3"}]': finished
2026-03-10T13:52:02.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:02 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T13:52:02.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:02 vm09 ceph-mon[53367]: osdmap e678: 8 total, 8 up, 8 in
2026-03-10T13:52:02.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:02 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T13:52:02.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:02 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T13:52:02.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:02 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_fpp","val": ".01"}]: dispatch
2026-03-10T13:52:02.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:02 vm09 ceph-mon[53367]: osdmap e679: 8 total, 8 up, 8 in
2026-03-10T13:52:02.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:02 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_fpp","val": ".01"}]: dispatch
2026-03-10T13:52:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_period","val": "3"}]': finished
2026-03-10T13:52:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T13:52:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[58955]: osdmap e678: 8 total, 8 up, 8 in
2026-03-10T13:52:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T13:52:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T13:52:03.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_fpp","val": ".01"}]: dispatch
2026-03-10T13:52:03.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[58955]: osdmap e679: 8 total, 8 up, 8 in
2026-03-10T13:52:03.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_fpp","val": ".01"}]: dispatch
2026-03-10T13:52:03.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_period","val": "3"}]': finished
2026-03-10T13:52:03.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T13:52:03.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[51512]: osdmap e678: 8 total, 8 up, 8 in
2026-03-10T13:52:03.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T13:52:03.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T13:52:03.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_fpp","val": ".01"}]: dispatch
2026-03-10T13:52:03.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[51512]: osdmap e679: 8 total, 8 up, 8 in
2026-03-10T13:52:03.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:02 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_fpp","val": ".01"}]: dispatch
2026-03-10T13:52:03.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:03 vm09 ceph-mon[53367]: pgmap v1040: 268 pgs: 16 creating+peering, 252 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:52:03.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:03 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_fpp","val": ".01"}]': finished
2026-03-10T13:52:03.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:03 vm09 ceph-mon[53367]: osdmap e680: 8 total, 8 up, 8 in
2026-03-10T13:52:04.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:03 vm05 ceph-mon[51512]: pgmap v1040: 268 pgs: 16 creating+peering, 252 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:52:04.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:03 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_fpp","val": ".01"}]': finished
2026-03-10T13:52:04.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:03 vm05 ceph-mon[51512]: osdmap e680: 8 total, 8 up, 8 in
2026-03-10T13:52:04.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:03 vm05 ceph-mon[58955]: pgmap v1040: 268 pgs: 16 creating+peering, 252 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:52:04.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:03 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-140","var": "hit_set_fpp","val": ".01"}]': finished
2026-03-10T13:52:04.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:03 vm05 ceph-mon[58955]: osdmap e680: 8 total, 8 up, 8 in
2026-03-10T13:52:05.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:05 vm09 ceph-mon[53367]: pgmap v1043: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:52:06.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:05 vm05 ceph-mon[51512]: pgmap v1043: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:52:06.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:05 vm05 ceph-mon[58955]: pgmap v1043: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T13:52:06.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:06 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:52:07.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:06 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:52:07.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:06 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:52:08.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:07 vm05 ceph-mon[51512]: pgmap v1044: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 914 B/s rd, 0 op/s
2026-03-10T13:52:08.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:07 vm05 ceph-mon[58955]: pgmap v1044: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 914 B/s rd, 0 op/s
2026-03-10T13:52:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:07 vm09 ceph-mon[53367]: pgmap v1044: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 914 B/s rd, 0 op/s
2026-03-10T13:52:09.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:52:09.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:52:09.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:52:09.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:52:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T13:52:10.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:09 vm05 ceph-mon[51512]: pgmap v1045: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 782 B/s rd, 0 op/s
2026-03-10T13:52:10.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:52:10.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:09 vm05 ceph-mon[58955]: pgmap v1045: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 782 B/s rd, 0 op/s
2026-03-10T13:52:10.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:52:10.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:52:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:52:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T13:52:10.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:09 vm09 ceph-mon[53367]: pgmap v1045: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 782 B/s rd, 0 op/s
2026-03-10T13:52:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:52:12.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:11 vm05 ceph-mon[51512]: pgmap v1046: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.1 KiB/s wr, 1 op/s
2026-03-10T13:52:12.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:11 vm05 ceph-mon[58955]: pgmap v1046: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.1 KiB/s wr, 1 op/s
2026-03-10T13:52:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:11 vm09 ceph-mon[53367]: pgmap v1046: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.1 KiB/s wr, 1 op/s
2026-03-10T13:52:14.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:14 vm05 ceph-mon[51512]: pgmap v1047: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 4.3 KiB/s wr, 1 op/s
2026-03-10T13:52:14.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:14 vm05 ceph-mon[58955]: pgmap v1047: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 4.3 KiB/s wr, 1 op/s
2026-03-10T13:52:14.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:14 vm09 ceph-mon[53367]: pgmap v1047: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 4.3 KiB/s wr, 1 op/s
2026-03-10T13:52:15.325 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:15 vm09 ceph-mon[53367]: pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 972 B/s rd, 7.9 KiB/s wr, 2 op/s
2026-03-10T13:52:15.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:15 vm05 ceph-mon[51512]: pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 972 B/s rd, 7.9 KiB/s wr, 2 op/s
2026-03-10T13:52:15.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:15 vm05 ceph-mon[58955]: pgmap v1048: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 972 B/s rd, 7.9 KiB/s wr, 2 op/s
2026-03-10T13:52:16.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:52:16.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:16 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:52:16.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140"}]: dispatch
2026-03-10T13:52:16.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:16 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140"}]: dispatch
2026-03-10T13:52:16.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:52:16.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:16 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:52:16.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140"}]: dispatch
2026-03-10T13:52:16.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:16 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140"}]: dispatch
2026-03-10T13:52:16.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:52:16.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:16 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:52:16.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140"}]: dispatch
2026-03-10T13:52:16.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:16 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140"}]: dispatch
2026-03-10T13:52:17.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:17 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140"}]': finished
2026-03-10T13:52:17.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:17 vm09 ceph-mon[53367]: osdmap e681: 8 total, 8 up, 8 in
2026-03-10T13:52:17.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:17 vm09 ceph-mon[53367]: pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 8.3 KiB/s wr, 2 op/s
2026-03-10T13:52:17.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:17 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140"}]': finished
2026-03-10T13:52:17.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:17 vm05 ceph-mon[51512]: osdmap e681: 8 total, 8 up, 8 in
2026-03-10T13:52:17.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:17 vm05 ceph-mon[51512]: pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 8.3 KiB/s wr, 2 op/s
2026-03-10T13:52:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:17 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-140"}]': finished
2026-03-10T13:52:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:17 vm05 ceph-mon[58955]: osdmap e681: 8 total, 8 up, 8 in
2026-03-10T13:52:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:17 vm05 ceph-mon[58955]: pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 8.3 KiB/s wr, 2 op/s
2026-03-10T13:52:18.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:18 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:52:18.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:18 vm05 ceph-mon[51512]: osdmap e682: 8 total, 8 up, 8 in
2026-03-10T13:52:18.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:18 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:52:18.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:18 vm05 ceph-mon[58955]: osdmap e682: 8 total, 8 up, 8 in
2026-03-10T13:52:18.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:18 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T13:52:18.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:18 vm09 ceph-mon[53367]: osdmap e682: 8 total, 8 up, 8 in
2026-03-10T13:52:19.173 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:52:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T13:52:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:19 vm05 ceph-mon[58955]: osdmap e683: 8 total, 8 up, 8 in
2026-03-10T13:52:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-142","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:52:19.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:19 vm05 ceph-mon[58955]: pgmap v1053: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:52:19.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:19 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-142","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:52:19.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:52:19.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:19 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-142","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:52:19.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:19 vm05 ceph-mon[58955]: osdmap e684: 8 total, 8 up, 8 in
2026-03-10T13:52:19.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:19 vm05 ceph-mon[51512]: osdmap e683: 8 total, 8 up, 8 in
2026-03-10T13:52:19.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-142","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:52:19.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:19 vm05 ceph-mon[51512]: pgmap v1053: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:52:19.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:19 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-142","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:52:19.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:52:19.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:19 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-142","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:52:19.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:19 vm05 ceph-mon[51512]: osdmap e684: 8 total, 8 up, 8 in
2026-03-10T13:52:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:19 vm09 ceph-mon[53367]: osdmap e683: 8 total, 8 up, 8 in
2026-03-10T13:52:19.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-142","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:52:19.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:19 vm09 ceph-mon[53367]: pgmap v1053: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T13:52:19.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:19 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-142","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:52:19.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T13:52:19.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:19 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-142","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T13:52:19.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:19 vm09 ceph-mon[53367]: osdmap e684: 8 total, 8 up, 8 in
2026-03-10T13:52:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:20 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:20 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:20 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:52:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:20 vm05 ceph-mon[51512]: osdmap e685: 8 total, 8 up, 8 in 2026-03-10T13:52:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:20 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:20 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:20 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:20 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:20 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:52:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:20 vm05 ceph-mon[58955]: osdmap e685: 8 total, 8 up, 8 in 2026-03-10T13:52:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:20 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:20.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:20 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:20.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:52:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:52:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:52:20.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:20.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:20 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:20.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:20 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:52:20.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:20 vm09 ceph-mon[53367]: osdmap e685: 8 total, 8 up, 8 in 2026-03-10T13:52:20.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:20 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:20.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:20 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:21.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:21 vm05 ceph-mon[51512]: pgmap v1055: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:21.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:21 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-142"}]': finished 2026-03-10T13:52:21.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:21 vm05 ceph-mon[51512]: osdmap e686: 8 total, 8 up, 8 in 2026-03-10T13:52:21.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:21 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-142", "mode": "writeback"}]: dispatch 2026-03-10T13:52:21.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:21 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-142", "mode": "writeback"}]: dispatch 2026-03-10T13:52:21.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:21 vm05 ceph-mon[58955]: pgmap v1055: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:21.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:21 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-142"}]': finished 2026-03-10T13:52:21.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:21 vm05 ceph-mon[58955]: osdmap e686: 8 total, 8 up, 8 in 2026-03-10T13:52:21.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-142", "mode": "writeback"}]: dispatch 2026-03-10T13:52:21.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:21 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-142", "mode": "writeback"}]: dispatch 2026-03-10T13:52:21.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:21 vm09 ceph-mon[53367]: pgmap v1055: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:21.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:21 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-142"}]': finished 2026-03-10T13:52:21.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:21 vm09 ceph-mon[53367]: osdmap e686: 8 total, 8 up, 8 in 2026-03-10T13:52:21.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:21 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-142", "mode": "writeback"}]: dispatch 2026-03-10T13:52:21.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:21 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-142", "mode": "writeback"}]: dispatch 2026-03-10T13:52:22.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:22 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:52:22.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:22 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-142", "mode": "writeback"}]': finished 2026-03-10T13:52:22.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:22 vm05 ceph-mon[51512]: osdmap e687: 8 total, 8 up, 8 in 2026-03-10T13:52:22.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:22 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:52:22.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:22 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:52:22.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:22 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:52:22.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:22 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-142", "mode": "writeback"}]': finished 2026-03-10T13:52:22.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:22 vm05 ceph-mon[58955]: osdmap e687: 8 total, 8 up, 8 in 2026-03-10T13:52:22.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:52:22.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:22 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:52:22.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:22 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:52:22.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:22 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-142", "mode": "writeback"}]': finished 2026-03-10T13:52:22.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:22 vm09 ceph-mon[53367]: osdmap e687: 8 total, 8 up, 8 in 2026-03-10T13:52:22.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:22 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:52:22.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:22 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:52:24.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:24 vm05 ceph-mon[51512]: pgmap v1058: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:24.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:24 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:52:24.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:24 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:52:24.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:24 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_count","val": "2"}]': finished 2026-03-10T13:52:24.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:24 vm05 ceph-mon[51512]: osdmap e688: 8 total, 8 up, 8 in 2026-03-10T13:52:24.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:24 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:52:24.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:24 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:52:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:24 vm05 ceph-mon[58955]: pgmap v1058: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:24.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:24 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:52:24.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:24 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:52:24.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:24 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_count","val": "2"}]': finished 2026-03-10T13:52:24.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:24 vm05 ceph-mon[58955]: osdmap e688: 8 total, 8 up, 8 in 2026-03-10T13:52:24.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:24 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:52:24.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:24 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:52:24.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:24 vm09 ceph-mon[53367]: pgmap v1058: 268 pgs: 32 creating+peering, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:24.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:24 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:52:24.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:24 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:52:24.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:24 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_count","val": "2"}]': finished 2026-03-10T13:52:24.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:24 vm09 ceph-mon[53367]: osdmap e688: 8 total, 8 up, 8 in 2026-03-10T13:52:24.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:52:24.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:24 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:52:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:25 vm05 ceph-mon[58955]: pgmap v1061: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T13:52:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:25 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:52:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:25 vm05 ceph-mon[58955]: osdmap e689: 8 total, 8 up, 8 in 2026-03-10T13:52:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:25 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:52:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:25 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:52:25.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:25 vm05 ceph-mon[51512]: pgmap v1061: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T13:52:25.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:25 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:52:25.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:25 vm05 ceph-mon[51512]: osdmap e689: 8 total, 8 up, 8 in 2026-03-10T13:52:25.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:52:25.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:25 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:52:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:25 vm09 ceph-mon[53367]: pgmap v1061: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T13:52:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:25 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:52:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:25 vm09 ceph-mon[53367]: osdmap e689: 8 total, 8 up, 8 in 2026-03-10T13:52:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:52:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:25 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:52:26.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:26 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:52:26.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:26 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T13:52:26.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:26 vm05 ceph-mon[51512]: osdmap e690: 8 total, 8 up, 8 in 2026-03-10T13:52:26.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:26 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T13:52:26.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:26 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T13:52:26.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:26 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:52:26.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:26 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T13:52:26.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:26 vm05 ceph-mon[58955]: osdmap e690: 8 total, 8 up, 8 in 2026-03-10T13:52:26.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:26 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T13:52:26.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:26 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T13:52:26.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:26 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:52:26.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:26 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T13:52:26.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:26 vm09 ceph-mon[53367]: osdmap e690: 8 total, 8 up, 8 in 2026-03-10T13:52:26.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:26 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T13:52:26.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:26 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T13:52:27.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:27 vm05 ceph-mon[51512]: pgmap v1064: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T13:52:27.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:27 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T13:52:27.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:27 vm05 ceph-mon[51512]: osdmap e691: 8 total, 8 up, 8 in 2026-03-10T13:52:27.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:27 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T13:52:27.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:27 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T13:52:27.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:27 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:52:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:27 vm05 ceph-mon[58955]: pgmap v1064: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T13:52:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:27 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T13:52:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:27 vm05 ceph-mon[58955]: osdmap e691: 8 total, 8 up, 8 in 2026-03-10T13:52:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T13:52:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:27 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T13:52:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:27 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:52:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:27 vm09 ceph-mon[53367]: pgmap v1064: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T13:52:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:27 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T13:52:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:27 vm09 ceph-mon[53367]: osdmap e691: 8 total, 8 up, 8 in 2026-03-10T13:52:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:27 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T13:52:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:27 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T13:52:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:27 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:52:28.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:28 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T13:52:28.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:28 vm05 ceph-mon[51512]: osdmap e692: 8 total, 8 up, 8 in 2026-03-10T13:52:28.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T13:52:28.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:28 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T13:52:28.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:28 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T13:52:28.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:28 vm05 ceph-mon[58955]: osdmap e692: 8 total, 8 up, 8 in 2026-03-10T13:52:28.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:28 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T13:52:28.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:28 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T13:52:28.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:28 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T13:52:28.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:28 vm09 ceph-mon[53367]: osdmap e692: 8 total, 8 up, 8 in 2026-03-10T13:52:28.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:28 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T13:52:28.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:28 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T13:52:29.173 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:52:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:52:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:29 vm05 ceph-mon[58955]: pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:52:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:29 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-10T13:52:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:29 vm05 ceph-mon[58955]: osdmap e693: 8 total, 8 up, 8 in 2026-03-10T13:52:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:29 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:29 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:29.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:52:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:29 vm05 ceph-mon[51512]: pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:52:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:29 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-10T13:52:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:29 vm05 ceph-mon[51512]: osdmap e693: 8 total, 8 up, 8 in 2026-03-10T13:52:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:29 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:29 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:29.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:52:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:29 vm09 ceph-mon[53367]: pgmap v1067: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:52:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:29 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-10T13:52:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:29 vm09 ceph-mon[53367]: osdmap e693: 8 total, 8 up, 8 in 2026-03-10T13:52:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:29 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:29 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:52:30.315 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:52:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:52:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:52:30.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:30 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:52:30.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:30 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:30.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:30 vm05 ceph-mon[58955]: osdmap e694: 8 total, 8 up, 8 in 2026-03-10T13:52:30.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:30 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:30.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:30 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:52:30.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:30 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:30.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:30 vm05 ceph-mon[51512]: osdmap e694: 8 total, 8 up, 8 in 2026-03-10T13:52:30.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:30 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:30 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:52:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:30 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:30 vm09 ceph-mon[53367]: osdmap e694: 8 total, 8 up, 8 in 2026-03-10T13:52:30.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:30 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:31.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:31 vm05 ceph-mon[51512]: pgmap v1070: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T13:52:31.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:31 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142"}]': finished 2026-03-10T13:52:31.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:31 vm05 ceph-mon[51512]: osdmap e695: 8 total, 8 up, 8 in 2026-03-10T13:52:31.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:31 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:31.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:31 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:31.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:31 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:31.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:31 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:31.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:31 vm05 ceph-mon[58955]: pgmap v1070: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T13:52:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:31 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142"}]': finished 2026-03-10T13:52:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:31 vm05 ceph-mon[58955]: osdmap e695: 8 total, 8 up, 8 in 2026-03-10T13:52:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:31 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:31 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:31.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:31 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:31 vm09 ceph-mon[53367]: pgmap v1070: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T13:52:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:31 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142"}]': finished 2026-03-10T13:52:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:31 vm09 ceph-mon[53367]: osdmap e695: 8 total, 8 up, 8 in 2026-03-10T13:52:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:31 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:31 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:31 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:31 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-142"}]: dispatch 2026-03-10T13:52:32.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:32 vm05 ceph-mon[51512]: osdmap e696: 8 total, 8 up, 8 in 2026-03-10T13:52:32.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:32 vm05 ceph-mon[58955]: osdmap e696: 8 total, 8 up, 8 in 2026-03-10T13:52:32.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:32 vm09 ceph-mon[53367]: osdmap e696: 8 total, 8 up, 8 in 2026-03-10T13:52:33.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:33 vm09 ceph-mon[53367]: pgmap v1073: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T13:52:33.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:33 vm09 ceph-mon[53367]: osdmap e697: 8 total, 8 up, 8 in 2026-03-10T13:52:33.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:33 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:52:33.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:33 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:52:33.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:33 vm05 ceph-mon[51512]: pgmap v1073: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T13:52:33.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:33 vm05 ceph-mon[51512]: osdmap e697: 8 total, 8 up, 8 in 2026-03-10T13:52:33.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:33 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:52:33.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:33 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:52:33.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:33 vm05 ceph-mon[58955]: pgmap v1073: 236 pgs: 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T13:52:33.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:33 vm05 ceph-mon[58955]: osdmap e697: 8 total, 8 up, 8 in 2026-03-10T13:52:33.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:33 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:52:33.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:33 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:52:34.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:34 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:52:34.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:34 vm09 ceph-mon[53367]: osdmap e698: 8 total, 8 up, 8 in 2026-03-10T13:52:34.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:34 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:34.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:34 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:34.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:34 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:52:34.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:34 vm05 ceph-mon[51512]: osdmap e698: 8 total, 8 up, 8 in 2026-03-10T13:52:34.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:34 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:34.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:34 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:34.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:34 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:52:34.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:34 vm05 ceph-mon[58955]: osdmap e698: 8 total, 8 up, 8 in 2026-03-10T13:52:34.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:34 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:34.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:34 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:35 vm09 ceph-mon[53367]: pgmap v1076: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:35.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:35 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:52:35.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:35 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:35.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:35 vm09 ceph-mon[53367]: osdmap e699: 8 total, 8 up, 8 in 2026-03-10T13:52:35.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:35 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:35 vm05 ceph-mon[51512]: pgmap v1076: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:35.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:35 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:52:35.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:35 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:35.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:35 vm05 ceph-mon[51512]: osdmap e699: 8 total, 8 up, 8 in 2026-03-10T13:52:35.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:35 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:35.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:35 vm05 ceph-mon[58955]: pgmap v1076: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:35.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:35 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:52:35.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:35 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:35.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:35 vm05 ceph-mon[58955]: osdmap e699: 8 total, 8 up, 8 in 2026-03-10T13:52:35.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:35 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:36 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-144"}]': finished 2026-03-10T13:52:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:36 vm09 ceph-mon[53367]: osdmap e700: 8 total, 8 up, 8 in 2026-03-10T13:52:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:36 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-144", "mode": "readproxy"}]: dispatch 2026-03-10T13:52:36.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:36 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-144", "mode": "readproxy"}]: dispatch 2026-03-10T13:52:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:36 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-144"}]': finished 2026-03-10T13:52:36.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:36 vm05 ceph-mon[58955]: osdmap e700: 8 total, 8 up, 8 in 2026-03-10T13:52:36.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:36 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-144", "mode": "readproxy"}]: dispatch 2026-03-10T13:52:36.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:36 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-144", "mode": "readproxy"}]: dispatch 2026-03-10T13:52:36.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:36 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-144"}]': finished 2026-03-10T13:52:36.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:36 vm05 ceph-mon[51512]: osdmap e700: 8 total, 8 up, 8 in 2026-03-10T13:52:36.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:36 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-144", "mode": "readproxy"}]: dispatch 2026-03-10T13:52:36.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:36 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-144", "mode": "readproxy"}]: dispatch 2026-03-10T13:52:37.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:37 vm09 ceph-mon[53367]: pgmap v1079: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:37.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:37 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:52:37.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:37 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-144", "mode": "readproxy"}]': finished 2026-03-10T13:52:37.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:37 vm09 ceph-mon[53367]: osdmap e701: 8 total, 8 up, 8 in 2026-03-10T13:52:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:37 vm05 ceph-mon[58955]: pgmap v1079: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:37 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:52:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:37 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-144", "mode": "readproxy"}]': finished 2026-03-10T13:52:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:37 vm05 ceph-mon[58955]: osdmap e701: 8 total, 8 up, 8 in 2026-03-10T13:52:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:37 vm05 ceph-mon[51512]: pgmap v1079: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:37 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:52:37.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:37 vm05 ceph-mon[51512]: from='client.49994 ' 
entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-144", "mode": "readproxy"}]': finished 2026-03-10T13:52:37.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:37 vm05 ceph-mon[51512]: osdmap e701: 8 total, 8 up, 8 in 2026-03-10T13:52:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:52:38.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:52:38.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:52:39.173 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:52:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:52:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:39 vm05 ceph-mon[58955]: pgmap v1081: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T13:52:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:52:39.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:39 vm05 ceph-mon[51512]: pgmap v1081: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T13:52:39.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:52:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:39 vm09 ceph-mon[53367]: pgmap v1081: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T13:52:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:52:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:52:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:52:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:52:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:41 vm05 ceph-mon[58955]: pgmap v1082: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s 2026-03-10T13:52:41.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:41 vm05 ceph-mon[51512]: pgmap v1082: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s 2026-03-10T13:52:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:41 vm09 ceph-mon[53367]: pgmap v1082: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s 2026-03-10T13:52:43.831 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:43 vm05 ceph-mon[58955]: pgmap v1083: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 784 B/s rd, 130 B/s wr, 0 op/s 2026-03-10T13:52:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:43 vm05 ceph-mon[51512]: pgmap v1083: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 784 B/s rd, 130 B/s wr, 0 op/s 2026-03-10T13:52:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:43 vm09 ceph-mon[53367]: pgmap v1083: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 784 B/s rd, 130 B/s wr, 0 op/s 2026-03-10T13:52:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:52:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:52:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:52:44.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:44 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:52:44.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:52:44.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:52:44.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:52:44.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:44 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:52:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:44 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:52:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:44 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:52:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:44 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:52:44.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:44 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:52:45.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:45 vm09 ceph-mon[53367]: pgmap v1084: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 115 B/s wr, 1 op/s 2026-03-10T13:52:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:45 vm05 ceph-mon[58955]: pgmap v1084: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 115 B/s wr, 1 op/s 
2026-03-10T13:52:45.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:45 vm05 ceph-mon[51512]: pgmap v1084: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 115 B/s wr, 1 op/s 2026-03-10T13:52:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:46 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:46.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:46 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:46 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:46.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:46 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:46 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:46.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:46 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:47 vm05 ceph-mon[58955]: pgmap v1085: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 102 B/s wr, 1 op/s 2026-03-10T13:52:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:47 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:52:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:47 vm05 ceph-mon[58955]: osdmap e702: 8 total, 8 up, 8 in 2026-03-10T13:52:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:47 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:47 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:47 vm05 ceph-mon[51512]: pgmap v1085: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 102 B/s wr, 1 op/s 2026-03-10T13:52:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:47 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:52:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:47 vm05 ceph-mon[51512]: osdmap e702: 8 total, 8 up, 8 in 2026-03-10T13:52:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:47 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:47.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:47 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:47 vm09 ceph-mon[53367]: pgmap v1085: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 102 B/s wr, 1 op/s 2026-03-10T13:52:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:47 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:52:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:47 vm09 ceph-mon[53367]: osdmap e702: 8 total, 8 up, 8 in 2026-03-10T13:52:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:47 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:47 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:48 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:52:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:48 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144"}]': finished 2026-03-10T13:52:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:48 vm05 ceph-mon[58955]: osdmap e703: 8 total, 8 up, 8 in 2026-03-10T13:52:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:48 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:48 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:48 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:48 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:48.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:48 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:52:48.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:48 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144"}]': finished 2026-03-10T13:52:48.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:48 vm05 ceph-mon[51512]: osdmap e703: 8 total, 8 up, 8 in 2026-03-10T13:52:48.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:48.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:48 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:48.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:48 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:48.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:48 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:48.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:48 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:52:48.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:48 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144"}]': finished 2026-03-10T13:52:48.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:48 vm09 ceph-mon[53367]: osdmap e703: 8 total, 8 up, 8 in 2026-03-10T13:52:48.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:48 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:48.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:48 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:52:48.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:48 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:48.862 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:48 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-144"}]: dispatch 2026-03-10T13:52:49.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:52:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:52:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:49 vm05 ceph-mon[58955]: pgmap v1088: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T13:52:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:49 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:52:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:49 vm05 ceph-mon[58955]: osdmap e704: 8 total, 8 up, 8 in 2026-03-10T13:52:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:52:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:49 vm05 ceph-mon[51512]: pgmap v1088: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T13:52:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:49 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:52:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:49 vm05 ceph-mon[51512]: osdmap e704: 8 total, 8 up, 8 in 2026-03-10T13:52:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:52:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:49 vm09 ceph-mon[53367]: pgmap v1088: 268 pgs: 268 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T13:52:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:49 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:52:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:49 vm09 ceph-mon[53367]: osdmap e704: 8 total, 8 up, 8 in 2026-03-10T13:52:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:52:50.331 
INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:52:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:52:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:52:51.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:50 vm09 ceph-mon[53367]: osdmap e705: 8 total, 8 up, 8 in 2026-03-10T13:52:51.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:50 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:52:51.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:50 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:52:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:50 vm05 ceph-mon[58955]: osdmap e705: 8 total, 8 up, 8 in 2026-03-10T13:52:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:50 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:52:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:50 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:52:51.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:50 vm05 ceph-mon[51512]: osdmap e705: 8 total, 8 up, 8 in 2026-03-10T13:52:51.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:50 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:52:51.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:50 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:52:52.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:51 vm09 ceph-mon[53367]: pgmap v1091: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:52.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:51 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:52:52.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:51 vm09 ceph-mon[53367]: osdmap e706: 8 total, 8 up, 8 in 2026-03-10T13:52:52.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:51 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:52.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:51 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:51 vm05 ceph-mon[58955]: pgmap v1091: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:51 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:52:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:51 vm05 ceph-mon[58955]: osdmap e706: 8 total, 8 up, 8 in 2026-03-10T13:52:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:51 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:51 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:52.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:51 vm05 ceph-mon[51512]: pgmap v1091: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:52.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:51 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:52:52.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:51 vm05 ceph-mon[51512]: osdmap e706: 8 total, 8 up, 8 in 2026-03-10T13:52:52.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:51 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:52.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:51 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:52:53.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:52 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:52:53.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:52 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:52:53.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:52 vm05 ceph-mon[58955]: osdmap e707: 8 total, 8 up, 8 in 2026-03-10T13:52:53.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:52 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:52:53.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:52 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-146"}]': finished 2026-03-10T13:52:53.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:52 vm05 ceph-mon[58955]: osdmap e708: 8 total, 8 up, 8 in 2026-03-10T13:52:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:52 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:52:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:52 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:52:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:52 vm05 ceph-mon[51512]: osdmap e707: 8 total, 8 up, 8 in 2026-03-10T13:52:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:52 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:52:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:52 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-146"}]': finished 2026-03-10T13:52:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:52 vm05 ceph-mon[51512]: osdmap e708: 8 total, 8 up, 8 in 2026-03-10T13:52:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:52 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:52:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:52 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:52:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:52 vm09 ceph-mon[53367]: osdmap e707: 8 total, 8 up, 8 in 2026-03-10T13:52:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:52 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:52:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:52 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm05-91276-111", "overlaypool": "test-rados-api-vm05-91276-146"}]': finished 2026-03-10T13:52:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:52 vm09 ceph-mon[53367]: osdmap e708: 8 total, 8 up, 8 in 2026-03-10T13:52:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[58955]: pgmap v1094: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-146", "mode": "writeback"}]: dispatch 2026-03-10T13:52:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-146", "mode": "writeback"}]: dispatch 2026-03-10T13:52:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:52:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:52:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[58955]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:52:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-146", "mode": "writeback"}]': finished 2026-03-10T13:52:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[58955]: osdmap e709: 8 total, 8 up, 8 in 2026-03-10T13:52:54.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[51512]: pgmap v1094: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:54.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-146", "mode": "writeback"}]: dispatch 2026-03-10T13:52:54.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-146", "mode": "writeback"}]: dispatch 2026-03-10T13:52:54.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:52:54.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:52:54.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[51512]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:52:54.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-146", "mode": "writeback"}]': finished 2026-03-10T13:52:54.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:53 vm05 ceph-mon[51512]: osdmap e709: 8 total, 8 up, 8 in 2026-03-10T13:52:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:53 vm09 ceph-mon[53367]: pgmap v1094: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:52:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:53 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-146", "mode": "writeback"}]: dispatch 2026-03-10T13:52:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:53 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-146", "mode": "writeback"}]: dispatch 2026-03-10T13:52:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:53 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:52:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:52:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:53 vm09 ceph-mon[53367]: Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T13:52:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:53 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm05-91276-146", "mode": "writeback"}]': finished 2026-03-10T13:52:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:53 vm09 ceph-mon[53367]: osdmap e709: 8 total, 8 up, 8 in 2026-03-10T13:52:55.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:54 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:52:55.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:54 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:52:55.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:54 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:52:55.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:54 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:52:55.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:54 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:52:55.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:54 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T13:52:56.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:56 vm05 ceph-mon[58955]: pgmap v1097: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T13:52:56.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:56 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_count","val": "2"}]': finished 2026-03-10T13:52:56.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:56 vm05 ceph-mon[58955]: osdmap e710: 8 total, 8 up, 8 in 2026-03-10T13:52:56.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:56 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:52:56.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:56 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:52:56.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:56 vm05 ceph-mon[51512]: pgmap v1097: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T13:52:56.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:56 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_count","val": "2"}]': finished 2026-03-10T13:52:56.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:56 vm05 ceph-mon[51512]: osdmap e710: 8 total, 8 up, 8 in 2026-03-10T13:52:56.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:56 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:52:56.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:56 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:52:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:56 vm09 ceph-mon[53367]: pgmap v1097: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T13:52:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:56 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_count","val": "2"}]': finished 2026-03-10T13:52:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:56 vm09 ceph-mon[53367]: osdmap e710: 8 total, 8 up, 8 in 2026-03-10T13:52:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:56 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:52:56.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:56 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T13:52:57.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:57 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:52:57.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:57 vm05 ceph-mon[58955]: osdmap e711: 8 total, 8 up, 8 in 2026-03-10T13:52:57.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:57 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:52:57.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:57 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:52:57.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:57 vm05 ceph-mon[58955]: pgmap v1100: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T13:52:57.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:57 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:52:57.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:57 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:52:57.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:57 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:52:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:57 vm05 ceph-mon[51512]: osdmap e711: 8 total, 8 up, 8 in 2026-03-10T13:52:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:57 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:52:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:57 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:52:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:57 vm05 ceph-mon[51512]: pgmap v1100: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T13:52:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:57 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:52:57.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:57 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:52:57.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:57 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_period","val": "600"}]': finished 2026-03-10T13:52:57.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:57 vm09 ceph-mon[53367]: osdmap e711: 8 total, 8 up, 8 in 2026-03-10T13:52:57.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:57 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:52:57.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:57 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T13:52:57.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:57 vm09 ceph-mon[53367]: pgmap v1100: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T13:52:57.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:57 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:52:57.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:57 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T13:52:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:58 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T13:52:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:58 vm05 ceph-mon[58955]: osdmap e712: 8 total, 8 up, 8 in 2026-03-10T13:52:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:58 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T13:52:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:58 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T13:52:58.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:58 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T13:52:58.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:58 vm05 ceph-mon[51512]: osdmap e712: 8 total, 8 up, 8 in 2026-03-10T13:52:58.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:58 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T13:52:58.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:58 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T13:52:58.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:58 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T13:52:58.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:58 vm09 ceph-mon[53367]: osdmap e712: 8 total, 8 up, 8 in 2026-03-10T13:52:58.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:58 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T13:52:58.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:58 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T13:52:59.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:59 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T13:52:59.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:59 vm09 ceph-mon[53367]: osdmap e713: 8 total, 8 up, 8 in 2026-03-10T13:52:59.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:59 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T13:52:59.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:59 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T13:52:59.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:59 vm09 ceph-mon[53367]: pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:52:59.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:52:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:52:59.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:52:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:52:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:59 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T13:52:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:59 vm05 ceph-mon[58955]: osdmap e713: 8 total, 8 up, 8 in 2026-03-10T13:52:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:59 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T13:52:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:59 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T13:52:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:59 vm05 ceph-mon[58955]: pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:52:59.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:52:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:52:59.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:59 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T13:52:59.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:59 vm05 ceph-mon[51512]: osdmap e713: 8 total, 8 up, 8 in 2026-03-10T13:52:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:59 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T13:52:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:59 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T13:52:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:59 vm05 ceph-mon[51512]: pgmap v1103: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:52:59.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:52:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:00 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "target_max_objects","val": "1"}]': finished 2026-03-10T13:53:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:00 vm05 ceph-mon[58955]: osdmap e714: 8 total, 8 up, 8 in 2026-03-10T13:53:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:00 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "target_max_objects","val": "1"}]': finished 2026-03-10T13:53:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:00 vm05 ceph-mon[51512]: osdmap e714: 8 total, 8 up, 8 in 2026-03-10T13:53:00.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:52:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:52:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:53:00.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:00 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-146","var": "target_max_objects","val": "1"}]': finished 2026-03-10T13:53:00.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:00 vm09 ceph-mon[53367]: osdmap e714: 8 total, 8 up, 8 in 2026-03-10T13:53:01.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:01 vm09 ceph-mon[53367]: pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T13:53:01.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:01 vm05 ceph-mon[58955]: pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T13:53:01.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:01 vm05 ceph-mon[51512]: pgmap v1105: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T13:53:02.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:02 vm09 ceph-mon[53367]: Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T13:53:02.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:02 vm05 ceph-mon[58955]: Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T13:53:02.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:02 vm05 ceph-mon[51512]: Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 
2026-03-10T13:53:03.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:03 vm09 ceph-mon[53367]: pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T13:53:03.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:03 vm05 ceph-mon[58955]: pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T13:53:03.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:03 vm05 ceph-mon[51512]: pgmap v1106: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T13:53:05.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:05 vm05 ceph-mon[58955]: pgmap v1107: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T13:53:05.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:05 vm05 ceph-mon[51512]: pgmap v1107: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T13:53:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:05 vm09 ceph-mon[53367]: pgmap v1107: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T13:53:07.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:07 vm05 ceph-mon[58955]: pgmap v1108: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T13:53:07.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:07 vm05 ceph-mon[51512]: pgmap v1108: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T13:53:07.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:07 vm09 ceph-mon[53367]: pgmap v1108: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T13:53:09.167 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:53:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:53:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:09 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:53:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:09 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:53:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:09 vm09 ceph-mon[53367]: pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T13:53:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:09 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:09.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:09 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:09.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:09 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:53:09.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:09 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:53:09.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:09 vm05 ceph-mon[58955]: pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T13:53:09.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:09.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:09 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:09.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:09 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:09 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:53:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:09 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:53:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:09 vm05 ceph-mon[51512]: pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T13:53:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:09 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:09 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:10 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:53:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:10 vm05 ceph-mon[58955]: osdmap e715: 8 total, 8 up, 8 in 2026-03-10T13:53:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:10 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:53:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:10 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:53:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:10 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:53:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:10 vm05 ceph-mon[51512]: osdmap e715: 8 total, 8 up, 8 in 2026-03-10T13:53:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:10 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:53:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:10 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:53:10.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:53:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:53:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:53:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:10 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:53:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:10 vm09 ceph-mon[53367]: osdmap e715: 8 total, 8 up, 8 in 2026-03-10T13:53:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:10 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:53:10.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:10 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:53:11.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:11 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146"}]': finished 2026-03-10T13:53:11.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:11 vm05 ceph-mon[58955]: osdmap e716: 8 total, 8 up, 8 in 2026-03-10T13:53:11.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:11 vm05 ceph-mon[58955]: pgmap v1112: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:53:11.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:11.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:11 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:11.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:11 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:53:11.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:11 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:53:11.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:11 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146"}]': finished 2026-03-10T13:53:11.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:11 vm05 ceph-mon[51512]: osdmap e716: 8 total, 8 up, 8 in 2026-03-10T13:53:11.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:11 vm05 ceph-mon[51512]: pgmap v1112: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:53:11.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:11 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:11.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:11 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:11.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:11 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:53:11.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:11 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:53:11.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:11 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146"}]': finished 2026-03-10T13:53:11.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:11 vm09 ceph-mon[53367]: osdmap e716: 8 total, 8 up, 8 in 2026-03-10T13:53:11.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:11 vm09 ceph-mon[53367]: pgmap v1112: 268 pgs: 268 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T13:53:11.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:11.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:11 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:11.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:11 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:53:11.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:11 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-146"}]: dispatch 2026-03-10T13:53:12.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:12 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:53:12.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:12 vm05 ceph-mon[58955]: Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T13:53:12.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:12 vm05 ceph-mon[58955]: osdmap e717: 8 total, 8 up, 8 in 2026-03-10T13:53:12.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:12 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:53:12.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:12 vm05 ceph-mon[51512]: Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T13:53:12.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:12 vm05 ceph-mon[51512]: osdmap e717: 8 total, 8 up, 8 in 2026-03-10T13:53:12.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:12 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:53:12.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:12 vm09 ceph-mon[53367]: Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 
cache pools at or near target size) 2026-03-10T13:53:12.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:12 vm09 ceph-mon[53367]: osdmap e717: 8 total, 8 up, 8 in 2026-03-10T13:53:13.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:13 vm05 ceph-mon[58955]: osdmap e718: 8 total, 8 up, 8 in 2026-03-10T13:53:13.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:13 vm05 ceph-mon[58955]: pgmap v1115: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T13:53:13.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:13 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:13.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:13 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:13.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:13 vm05 ceph-mon[51512]: osdmap e718: 8 total, 8 up, 8 in 2026-03-10T13:53:13.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:13 vm05 ceph-mon[51512]: pgmap v1115: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T13:53:13.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:13 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:13.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:13 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:13.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:13 vm09 ceph-mon[53367]: osdmap e718: 8 total, 8 up, 8 in 2026-03-10T13:53:13.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:13 vm09 ceph-mon[53367]: pgmap v1115: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T13:53:13.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:13 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:13.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:13 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:14.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:14 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:53:14.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:14 vm05 ceph-mon[58955]: osdmap e719: 8 total, 8 up, 8 in 2026-03-10T13:53:14.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:14 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:53:14.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:14 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:53:14.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:14 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:53:14.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:14 vm05 ceph-mon[51512]: osdmap e719: 8 total, 8 up, 8 in 2026-03-10T13:53:14.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:14 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:53:14.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:14 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:53:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:14 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:53:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:14 vm09 ceph-mon[53367]: osdmap e719: 8 total, 8 up, 8 in 2026-03-10T13:53:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:14 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:53:14.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:14 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T13:53:15.575 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:15 vm09 ceph-mon[53367]: pgmap v1117: 268 pgs: 2 creating+activating, 266 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 254 B/s rd, 0 op/s 2026-03-10T13:53:15.575 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:15 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:53:15.575 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:15 vm09 ceph-mon[53367]: osdmap e720: 8 total, 8 up, 8 in 2026-03-10T13:53:15.575 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:15 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148"}]: dispatch 2026-03-10T13:53:15.575 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:15 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148"}]: dispatch 2026-03-10T13:53:15.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:15 vm05 ceph-mon[58955]: pgmap v1117: 268 pgs: 2 creating+activating, 266 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 254 B/s rd, 0 op/s 2026-03-10T13:53:15.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:15 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:53:15.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:15 vm05 ceph-mon[58955]: osdmap e720: 8 total, 8 up, 8 in 2026-03-10T13:53:15.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:15 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148"}]: dispatch 2026-03-10T13:53:15.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:15 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148"}]: dispatch 2026-03-10T13:53:15.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:15 vm05 ceph-mon[51512]: pgmap v1117: 268 pgs: 2 creating+activating, 266 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 254 B/s rd, 0 op/s 2026-03-10T13:53:15.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:15 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T13:53:15.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:15 vm05 ceph-mon[51512]: osdmap e720: 8 total, 8 up, 8 in 2026-03-10T13:53:15.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:15 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148"}]: dispatch 2026-03-10T13:53:15.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:15 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148"}]: dispatch 2026-03-10T13:53:16.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:16 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148"}]': finished 2026-03-10T13:53:16.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:16 vm05 ceph-mon[58955]: osdmap e721: 8 total, 8 up, 8 in 2026-03-10T13:53:16.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:16 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:16.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:16 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:16.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:16 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148"}]: dispatch 2026-03-10T13:53:16.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:16 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148"}]: dispatch 2026-03-10T13:53:16.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:16 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148"}]': finished 2026-03-10T13:53:16.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:16 vm05 ceph-mon[51512]: osdmap e721: 8 total, 8 up, 8 in 2026-03-10T13:53:16.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:16.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:16 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:16.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:16 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148"}]: dispatch 2026-03-10T13:53:16.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:16 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148"}]: dispatch 2026-03-10T13:53:16.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:16 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148"}]': finished 2026-03-10T13:53:16.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:16 vm09 ceph-mon[53367]: osdmap e721: 8 total, 8 up, 8 in 2026-03-10T13:53:16.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:16 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:16.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:16 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:16.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:16 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148"}]: dispatch 2026-03-10T13:53:16.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:16 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-148"}]: dispatch 2026-03-10T13:53:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:17 vm05 ceph-mon[58955]: pgmap v1120: 268 pgs: 2 creating+activating, 266 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:53:17.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:17 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:53:17.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:17 vm05 ceph-mon[58955]: osdmap e722: 8 total, 8 up, 8 in 2026-03-10T13:53:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:17 vm05 ceph-mon[51512]: pgmap v1120: 268 pgs: 2 creating+activating, 266 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:53:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:17 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:53:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:17 vm05 ceph-mon[51512]: osdmap e722: 8 total, 8 up, 8 in 2026-03-10T13:53:17.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:17 vm09 ceph-mon[53367]: pgmap v1120: 268 pgs: 2 creating+activating, 266 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T13:53:17.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:17 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:53:17.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:17 vm09 ceph-mon[53367]: osdmap e722: 8 total, 8 up, 8 in 2026-03-10T13:53:18.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:18 vm05 ceph-mon[58955]: osdmap e723: 8 total, 8 up, 8 in 2026-03-10T13:53:18.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:18 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:18.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:18 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:18.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:18 vm05 ceph-mon[51512]: osdmap e723: 8 total, 8 up, 8 in 2026-03-10T13:53:18.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:18 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:18.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:18 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:18.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:18 vm09 ceph-mon[53367]: osdmap e723: 8 total, 8 up, 8 in 2026-03-10T13:53:18.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:18 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:18.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:18 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:19.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:53:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:53:19.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:19 vm09 ceph-mon[53367]: pgmap v1123: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:53:19.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:19 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:53:19.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:19 vm09 ceph-mon[53367]: osdmap e724: 8 total, 8 up, 8 in 2026-03-10T13:53:19.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:19 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:19.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:19 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:19.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:19 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-150"}]: dispatch 2026-03-10T13:53:19.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:19 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-150"}]: dispatch 2026-03-10T13:53:19.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[58955]: pgmap v1123: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:53:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:53:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[58955]: osdmap e724: 8 total, 8 up, 8 in 2026-03-10T13:53:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:19.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:19.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-150"}]: dispatch 2026-03-10T13:53:19.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-150"}]: dispatch 2026-03-10T13:53:19.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:19.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[51512]: pgmap v1123: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:53:19.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:53:19.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[51512]: osdmap e724: 8 total, 8 up, 8 in 2026-03-10T13:53:19.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:19.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:19.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-150"}]: dispatch 2026-03-10T13:53:19.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-150"}]: dispatch 2026-03-10T13:53:19.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:53:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:53:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:53:20.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:20 vm09 ceph-mon[53367]: osdmap e725: 8 total, 8 up, 8 in 2026-03-10T13:53:20.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:20 vm05 ceph-mon[58955]: osdmap e725: 8 total, 8 up, 8 in 2026-03-10T13:53:20.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:20 vm05 ceph-mon[51512]: osdmap e725: 8 total, 8 up, 8 in 2026-03-10T13:53:21.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:21 vm09 ceph-mon[53367]: pgmap v1126: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:53:21.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:21 vm09 ceph-mon[53367]: osdmap e726: 8 total, 8 up, 8 in 2026-03-10T13:53:21.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:21 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:21.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:21 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:21.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:21 vm09 ceph-mon[53367]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:53:21.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:21 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:53:21.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:21 vm09 ceph-mon[53367]: osdmap e727: 8 total, 8 up, 8 in 2026-03-10T13:53:21.674 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:53:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=sqlstore.transactions t=2026-03-10T13:53:21.511427782Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-10T13:53:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:21 vm05 ceph-mon[58955]: pgmap v1126: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:53:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:21 vm05 ceph-mon[58955]: osdmap e726: 8 total, 8 up, 8 in 2026-03-10T13:53:21.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:21 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:21.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:21 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:21.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:21 vm05 ceph-mon[58955]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:53:21.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:21 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:53:21.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:21 vm05 ceph-mon[58955]: osdmap e727: 8 total, 8 up, 8 in 2026-03-10T13:53:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:21 vm05 ceph-mon[51512]: pgmap v1126: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:53:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:21 vm05 ceph-mon[51512]: osdmap e726: 8 total, 8 up, 8 in 2026-03-10T13:53:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:21 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:21 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:21 vm05 ceph-mon[51512]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:53:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:21 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:53:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:21 vm05 ceph-mon[51512]: osdmap e727: 8 total, 8 up, 8 in 2026-03-10T13:53:22.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:22.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:22 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:22.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:22 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-152"}]: dispatch 2026-03-10T13:53:22.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:22 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-152"}]: dispatch 2026-03-10T13:53:22.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:22.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:22 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:22.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:22 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-152"}]: dispatch 2026-03-10T13:53:22.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:22 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-152"}]: dispatch 2026-03-10T13:53:22.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:22 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:22.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:22 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:22.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:22 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-152"}]: dispatch 2026-03-10T13:53:22.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:22 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-152"}]: dispatch 2026-03-10T13:53:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:23 vm05 ceph-mon[58955]: pgmap v1129: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:53:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:23 vm05 ceph-mon[58955]: osdmap e728: 8 total, 8 up, 8 in 2026-03-10T13:53:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:23 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:53:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:53:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:23 vm05 ceph-mon[58955]: osdmap e729: 8 total, 8 up, 8 in 2026-03-10T13:53:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:23 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:23 vm05 ceph-mon[51512]: pgmap v1129: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:53:23.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:23 vm05 ceph-mon[51512]: osdmap e728: 8 total, 8 up, 8 in 2026-03-10T13:53:23.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:23 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:53:23.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:53:23.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:23 vm05 ceph-mon[51512]: osdmap e729: 8 total, 8 up, 8 in 2026-03-10T13:53:23.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:23 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:23 vm09 ceph-mon[53367]: pgmap v1129: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T13:53:23.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:23 vm09 ceph-mon[53367]: osdmap e728: 8 total, 8 up, 8 in 2026-03-10T13:53:23.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:23 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:53:23.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:53:23.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:23 vm09 ceph-mon[53367]: osdmap e729: 8 total, 8 up, 8 in 2026-03-10T13:53:23.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:23 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:24 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:24 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:53:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:24 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-111","var": "dedup_tier","val": "test-rados-api-vm05-91276-154"}]: dispatch 2026-03-10T13:53:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:24 vm05 ceph-mon[58955]: osdmap e730: 8 total, 8 up, 8 in 2026-03-10T13:53:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:24 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-111","var": "dedup_tier","val": "test-rados-api-vm05-91276-154"}]: dispatch 2026-03-10T13:53:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:24 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:24 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:53:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:24 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-111","var": "dedup_tier","val": "test-rados-api-vm05-91276-154"}]: dispatch 2026-03-10T13:53:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:24 vm05 ceph-mon[51512]: osdmap e730: 8 total, 8 up, 8 in 2026-03-10T13:53:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:24 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-111","var": "dedup_tier","val": "test-rados-api-vm05-91276-154"}]: dispatch 2026-03-10T13:53:24.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:24 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-154","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T13:53:24.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:24 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm05-91276-154","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T13:53:24.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:24 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-111","var": "dedup_tier","val": "test-rados-api-vm05-91276-154"}]: dispatch 2026-03-10T13:53:24.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:24 vm09 ceph-mon[53367]: osdmap e730: 8 total, 8 up, 8 in 2026-03-10T13:53:24.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:24 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm05-91276-111","var": "dedup_tier","val": "test-rados-api-vm05-91276-154"}]: dispatch 2026-03-10T13:53:25.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:25 vm05 ceph-mon[58955]: pgmap v1132: 268 pgs: 19 creating+peering, 13 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-10T13:53:25.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:25 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:25.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:25 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:25.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:25 vm05 ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-154"}]: dispatch 2026-03-10T13:53:25.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:25 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-154"}]: dispatch 2026-03-10T13:53:25.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:25 vm05 ceph-mon[58955]: osdmap e731: 8 total, 8 up, 8 in 2026-03-10T13:53:25.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:25 vm05 ceph-mon[51512]: pgmap v1132: 268 pgs: 19 creating+peering, 13 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-10T13:53:25.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:25.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:25 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:25.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:25 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-154"}]: dispatch 2026-03-10T13:53:25.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:25 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-154"}]: dispatch 2026-03-10T13:53:25.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:25 vm05 ceph-mon[51512]: osdmap e731: 8 total, 8 up, 8 in 2026-03-10T13:53:25.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:25 vm09 ceph-mon[53367]: pgmap v1132: 268 pgs: 19 creating+peering, 13 unknown, 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-10T13:53:25.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:25 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:25.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:25 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:25.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:25 vm09 ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-154"}]: dispatch 2026-03-10T13:53:25.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:25 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm05-91276-111", "tierpool": "test-rados-api-vm05-91276-154"}]: dispatch 2026-03-10T13:53:25.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:25 vm09 ceph-mon[53367]: osdmap e731: 8 total, 8 up, 8 in 2026-03-10T13:53:26.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:26 vm09 ceph-mon[53367]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:53:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:26 vm05 ceph-mon[58955]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:53:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:26 vm05 ceph-mon[51512]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:53:27.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:27 vm09 ceph-mon[53367]: pgmap v1135: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-10T13:53:27.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:27 vm09 ceph-mon[53367]: osdmap e732: 8 total, 8 up, 8 in 2026-03-10T13:53:27.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:27 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:27.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:27 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:27 vm05 ceph-mon[58955]: pgmap v1135: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-10T13:53:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:27 vm05 ceph-mon[58955]: osdmap e732: 8 total, 8 up, 8 in 2026-03-10T13:53:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:27 vm05 ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:27 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:27 vm05 ceph-mon[51512]: pgmap v1135: 236 pgs: 236 active+clean; 4.3 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 4 op/s 2026-03-10T13:53:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:27 vm05 ceph-mon[51512]: osdmap e732: 8 total, 8 up, 8 in 2026-03-10T13:53:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:27 vm05 ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:27 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TryFlush (7687 ms) 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FailedFlush 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FailedFlush (13247 ms) 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Flush 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Flush (8122 ms) 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FlushSnap 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FlushSnap (13163 ms) 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FlushTryFlushRaces 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FlushTryFlushRaces (7495 ms) 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TryFlushReadRace 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TryFlushReadRace (8183 ms) 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.HitSetRead 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: hmm, no HitSet yet 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: ok, hit_set contains 329:602f83fe:::foo:head 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.HitSetRead (9094 ms) 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.HitSetTrim 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150724,0 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: first is 1773150724 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150724,0 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150724,0 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150724,1773150726,1773150727,0 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150724,1773150726,1773150727,0 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150724,1773150726,1773150727,0 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150724,1773150726,1773150727,1773150729,1773150730,0 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150724,1773150726,1773150727,1773150729,1773150730,0 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 
1773150724,1773150726,1773150727,1773150729,1773150730,0 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150724,1773150726,1773150727,1773150729,1773150730,1773150732,1773150733,0 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150724,1773150726,1773150727,1773150729,1773150730,1773150732,1773150733,0 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150724,1773150726,1773150727,1773150729,1773150730,1773150732,1773150733,0 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: got ls 1773150727,1773150729,1773150730,1773150732,1773150733,1773150735,1773150736,0 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: first now 1773150727, trimmed 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.HitSetTrim (20620 ms) 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteOn2ndRead 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: foo0 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: verifying foo0 is eventually promoted 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteOn2ndRead (14181 ms) 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.ProxyRead 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.ProxyRead (17216 ms) 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.CachePin 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.CachePin (22632 ms) 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.SetRedirectRead 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.SetRedirectRead (5074 ms) 2026-03-10T13:53:28.667 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.SetChunkRead 2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.SetChunkRead (3089 ms) 2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.ManifestPromoteRead 2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.ManifestPromoteRead (3043 ms) 2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TrySetDedupTier 2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TrySetDedupTier (3011 ms) 2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] 22 tests from LibRadosTwoPoolsECPP (231085 ms total) 2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: 2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [----------] Global test environment tear-down 2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [==========] 77 tests from 4 test suites ran. (850133 ms total) 2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stdout: api_tier_pp: [ PASSED ] 77 tests. 
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91006
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91006
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91289
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91289
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91842
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91842
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91580
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91580
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91397
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91397
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91137
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91137
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91504
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91504
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92015
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 92015
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92066
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 92066
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91173
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91173
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91888
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91888
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=90978
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 90978
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91055
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91055
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91736
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91736
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=90968
2026-03-10T13:53:28.668 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 90968
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=90989
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 90989
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92225
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 92225
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91344
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91344
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92310
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 92310
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91461
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91461
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91966
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91966
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}"
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92359
2026-03-10T13:53:28.669 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 92359
2026-03-10T13:53:28.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:28 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-111"}]': finished
2026-03-10T13:53:28.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:28 vm09 ceph-mon[53367]: osdmap e733: 8 total, 8 up, 8 in
2026-03-10T13:53:28.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:28 vm09 ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:53:28.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:28 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-111"}]: dispatch
2026-03-10T13:53:28.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:53:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T13:53:29.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:28 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-111"}]': finished
2026-03-10T13:53:29.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:28 vm05 ceph-mon[58955]: osdmap e733: 8 total, 8 up, 8 in
2026-03-10T13:53:29.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:28 vm05 ceph-mon[58955]: from='client.?
v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:29.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:28 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:29.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:28 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:53:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:28 vm05 ceph-mon[51512]: osdmap e733: 8 total, 8 up, 8 in 2026-03-10T13:53:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:28 vm05 ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3647381632' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:29.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:28 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-111"}]: dispatch 2026-03-10T13:53:29.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:29 vm09 ceph-mon[53367]: pgmap v1138: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:53:29.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:29 vm09 ceph-mon[53367]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:53:29.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:29 vm09 ceph-mon[53367]: osdmap e734: 8 total, 8 up, 8 in 2026-03-10T13:53:29.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:29.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:29 vm05 ceph-mon[58955]: pgmap v1138: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:53:29.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:29 vm05 ceph-mon[58955]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:53:29.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:29 vm05 ceph-mon[58955]: osdmap e734: 8 total, 8 up, 8 in 2026-03-10T13:53:29.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:29.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:29 vm05 ceph-mon[51512]: pgmap v1138: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:53:29.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:29 vm05 ceph-mon[51512]: from='client.49994 ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm05-91276-111"}]': finished 2026-03-10T13:53:29.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:29 vm05 ceph-mon[51512]: osdmap e734: 8 total, 8 up, 8 in 2026-03-10T13:53:29.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:29 vm05 ceph-mon[51512]: from='client.24484 
v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:53:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:53:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:53:31.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:31 vm09 ceph-mon[53367]: pgmap v1140: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:53:31.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:31 vm09 ceph-mon[53367]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:53:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:31 vm05 ceph-mon[58955]: pgmap v1140: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:53:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:31 vm05 ceph-mon[58955]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:53:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:31 vm05 ceph-mon[51512]: pgmap v1140: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:53:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:31 vm05 ceph-mon[51512]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T13:53:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:33 vm05 ceph-mon[58955]: pgmap v1141: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:53:34.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:33 vm05 ceph-mon[51512]: pgmap v1141: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:53:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:33 vm09 ceph-mon[53367]: pgmap v1141: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T13:53:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:35 vm05 ceph-mon[58955]: pgmap v1142: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T13:53:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:35 vm05 ceph-mon[51512]: pgmap v1142: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T13:53:36.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:35 vm09 ceph-mon[53367]: pgmap v1142: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T13:53:38.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:37 vm05 ceph-mon[58955]: pgmap v1143: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:53:38.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:37 vm05 ceph-mon[51512]: pgmap v1143: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:53:38.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:37 vm09 ceph-mon[53367]: pgmap v1143: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:53:39.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:53:39.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:53:39.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:53:39.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:53:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:53:40.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:39 vm05 ceph-mon[58955]: pgmap v1144: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:53:40.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:39 vm05 ceph-mon[51512]: pgmap v1144: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:53:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:40.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:53:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:53:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:53:40.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:39 vm09 ceph-mon[53367]: pgmap v1144: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T13:53:40.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:41 vm05 ceph-mon[58955]: pgmap v1145: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 887 B/s rd, 0 op/s 2026-03-10T13:53:42.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:41 vm05 ceph-mon[51512]: pgmap v1145: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 887 B/s rd, 0 op/s 2026-03-10T13:53:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:41 vm09 ceph-mon[53367]: pgmap v1145: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 887 B/s rd, 0 op/s 2026-03-10T13:53:44.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:43 vm09 ceph-mon[53367]: pgmap v1146: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:53:44.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:43 vm05 ceph-mon[51512]: pgmap v1146: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:53:44.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:43 vm05 ceph-mon[58955]: pgmap v1146: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:53:45.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:44 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:53:45.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:44 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:53:45.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:44 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:53:45.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:44 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:53:45.231 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:53:45.231 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:53:45.231 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:53:45.232 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:44 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:53:45.232 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:53:45.232 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:53:45.232 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:53:45.232 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:44 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:53:46.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:45 vm09 ceph-mon[53367]: pgmap v1147: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:53:46.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:45 vm05 ceph-mon[58955]: pgmap v1147: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:53:46.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:45 vm05 ceph-mon[51512]: pgmap v1147: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:53:48.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:47 vm09 ceph-mon[53367]: pgmap v1148: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:53:48.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:47 vm05 ceph-mon[58955]: pgmap v1148: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 
159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:53:48.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:47 vm05 ceph-mon[51512]: pgmap v1148: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:53:49.173 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:53:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:53:50.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:49 vm09 ceph-mon[53367]: pgmap v1149: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:53:50.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:49 vm05 ceph-mon[58955]: pgmap v1149: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:53:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:49 vm05 ceph-mon[51512]: pgmap v1149: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:53:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:53:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:53:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:53:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:53:52.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:51 vm09 ceph-mon[53367]: pgmap v1150: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:53:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:51 vm05 ceph-mon[58955]: pgmap v1150: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:53:52.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:51 vm05 ceph-mon[51512]: pgmap v1150: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:53:54.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:53 vm09 ceph-mon[53367]: pgmap v1151: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:53:54.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:53:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:53 vm05 ceph-mon[58955]: pgmap v1151: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:53:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:53:54.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:53 vm05 ceph-mon[51512]: pgmap v1151: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:53:54.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:53:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:55 vm09 ceph-mon[53367]: pgmap v1152: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:53:56.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:55 vm05 ceph-mon[58955]: pgmap v1152: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:53:56.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:55 vm05 ceph-mon[51512]: pgmap v1152: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:53:58.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:57 vm09 ceph-mon[53367]: pgmap v1153: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:53:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:57 vm05 ceph-mon[58955]: pgmap v1153: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:53:58.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:57 vm05 ceph-mon[51512]: pgmap v1153: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:53:59.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:53:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:54:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:59 vm09 ceph-mon[53367]: pgmap v1154: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:00.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:53:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:59 vm05 ceph-mon[58955]: pgmap v1154: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:53:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:59 vm05 ceph-mon[51512]: pgmap v1154: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:53:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:53:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - 
[10/Mar/2026:13:53:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:54:02.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:01 vm09 ceph-mon[53367]: pgmap v1155: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:02.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:01 vm05 ceph-mon[58955]: pgmap v1155: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:02.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:01 vm05 ceph-mon[51512]: pgmap v1155: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:03 vm09 ceph-mon[53367]: pgmap v1156: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:03 vm05 ceph-mon[58955]: pgmap v1156: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:03 vm05 ceph-mon[51512]: pgmap v1156: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:05 vm05 ceph-mon[58955]: pgmap v1157: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:05 vm05 ceph-mon[51512]: pgmap v1157: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:06.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:05 vm09 ceph-mon[53367]: pgmap v1157: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:07 vm05 ceph-mon[58955]: pgmap v1158: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:08.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:07 vm05 ceph-mon[51512]: pgmap v1158: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:08.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:07 vm09 ceph-mon[53367]: pgmap v1158: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:54:09.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:54:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:54:09.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:54:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 
2026-03-10T13:54:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:09 vm05 ceph-mon[58955]: pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:09 vm05 ceph-mon[51512]: pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:10.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:54:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:54:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:54:10.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:09 vm09 ceph-mon[53367]: pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:10.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:11 vm05 ceph-mon[58955]: pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:12.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:11 vm05 ceph-mon[51512]: pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:12.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:11 vm09 ceph-mon[53367]: pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:13 vm05 ceph-mon[58955]: pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:14.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:13 vm05 ceph-mon[51512]: pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:14.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:13 vm09 ceph-mon[53367]: pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:16.226 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:15 vm09 ceph-mon[53367]: pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:15 vm05 ceph-mon[58955]: pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:16.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:15 vm05 ceph-mon[51512]: pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T13:54:18.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:18 vm09 ceph-mon[53367]: pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:18.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:18 vm05 ceph-mon[51512]: pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:18.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:18 vm05 ceph-mon[58955]: pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:19 vm09 ceph-mon[53367]: pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:19.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:19.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:54:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:54:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:19 vm05 ceph-mon[58955]: pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:19.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:19 vm05 ceph-mon[51512]: pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:19.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:54:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:54:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:54:21.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:21 vm05 ceph-mon[58955]: pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:21.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:21 vm05 ceph-mon[51512]: pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:21.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:21 vm09 ceph-mon[53367]: pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:23 vm05 ceph-mon[58955]: pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-10T13:54:23.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:23 vm05 ceph-mon[51512]: pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:23.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:54:23.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:23 vm09 ceph-mon[53367]: pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:23.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:54:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:25 vm05 ceph-mon[58955]: pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:25.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:25 vm05 ceph-mon[51512]: pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:25 vm09 ceph-mon[53367]: pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:27.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:27 vm05 ceph-mon[58955]: pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:27.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:27 vm05 ceph-mon[51512]: pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:27 vm09 ceph-mon[53367]: pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:29.328 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:54:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:54:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:29 vm05 ceph-mon[58955]: pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:29.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:29 vm05 ceph-mon[51512]: pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:29.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:29 vm09 ceph-mon[53367]: pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:29.674 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:54:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:54:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:54:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:31 vm09 ceph-mon[53367]: pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:31.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:31 vm05 ceph-mon[58955]: pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:31.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:31 vm05 ceph-mon[51512]: pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:33.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:33 vm09 ceph-mon[53367]: pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:33.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:33 vm05 ceph-mon[58955]: pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:33.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:33 vm05 ceph-mon[51512]: pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:35 vm09 ceph-mon[53367]: pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:35 vm05 ceph-mon[58955]: pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:35 vm05 ceph-mon[51512]: pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:37.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:37 vm09 ceph-mon[53367]: pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:37 vm05 ceph-mon[58955]: pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:37 vm05 ceph-mon[51512]: pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:38.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:54:38.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:54:38.832 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:54:39.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:54:38 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:54:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:39 vm09 ceph-mon[53367]: pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:39.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:39 vm05 ceph-mon[58955]: pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:39.981 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:39.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:39 vm05 ceph-mon[51512]: pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:39.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:54:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:54:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:54:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:41 vm09 ceph-mon[53367]: pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:41 vm05 ceph-mon[58955]: pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:41 vm05 ceph-mon[51512]: pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:44.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:43 vm05 ceph-mon[58955]: pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:44.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:43 vm05 ceph-mon[51512]: pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:44.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:43 vm09 ceph-mon[53367]: pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:45.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-10T13:54:45.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:54:45.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:54:45.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:44 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:54:45.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:54:45.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:54:45.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:54:45.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:44 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:54:45.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:44 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:54:45.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:44 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:54:45.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:44 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:54:45.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:44 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:54:46.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:45 vm09 ceph-mon[53367]: pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:46.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:45 vm05 ceph-mon[58955]: pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:45 vm05 ceph-mon[51512]: pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:48.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:48 vm05 ceph-mon[58955]: pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:48.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:48 vm05 ceph-mon[51512]: pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:48 vm09 ceph-mon[53367]: pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:49.331 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:49 vm05 ceph-mon[58955]: pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:49.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:49.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:49 vm05 ceph-mon[51512]: pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:49.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:49.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:49 vm09 ceph-mon[53367]: pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:49.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:49.423 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:54:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:54:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:54:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:54:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:54:51.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:51 vm05 ceph-mon[58955]: pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:51.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:51 vm05 ceph-mon[51512]: pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:51.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:51 vm09 ceph-mon[53367]: pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:53.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:53 vm05 ceph-mon[58955]: pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:53.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:54:53.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:53 vm05 ceph-mon[51512]: pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:53.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:54:53.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:53 vm09 ceph-mon[53367]: pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T13:54:53.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:54:55.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:55 vm05 ceph-mon[58955]: pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:55.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:55 vm05 ceph-mon[51512]: pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:55.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:55 vm09 ceph-mon[53367]: pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:54:57.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:57 vm05 ceph-mon[58955]: pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:57.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:57 vm05 ceph-mon[51512]: pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:57.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:57 vm09 ceph-mon[53367]: pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:59.352 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:54:58 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:54:59.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:59 vm09 ceph-mon[53367]: pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:59.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:54:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:59.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:59 vm05 ceph-mon[58955]: pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:59.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:54:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:54:59.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:59 vm05 ceph-mon[51512]: pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:54:59.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:54:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:54:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:54:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:55:01.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:01 vm09 ceph-mon[53367]: pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T13:55:01.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:01 vm05 ceph-mon[58955]: pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:01.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:01 vm05 ceph-mon[51512]: pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:03.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:03 vm09 ceph-mon[53367]: pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:03.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:03 vm05 ceph-mon[58955]: pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:03.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:03 vm05 ceph-mon[51512]: pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:05 vm09 ceph-mon[53367]: pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:05.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:05 vm05 ceph-mon[58955]: pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:05.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:05 vm05 ceph-mon[51512]: pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:07.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:07 vm09 ceph-mon[53367]: pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:07 vm05 ceph-mon[58955]: pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:07 vm05 ceph-mon[51512]: pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:55:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:55:08.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:55:09.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:55:08 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:55:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:09 vm05 ceph-mon[58955]: pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:09 vm05 
ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:09.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:09 vm05 ceph-mon[51512]: pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:09.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:09.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:09 vm09 ceph-mon[53367]: pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:09.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:55:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:55:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:55:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:11 vm05 ceph-mon[58955]: pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:11 vm05 ceph-mon[51512]: pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:11 vm09 ceph-mon[53367]: pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:13 vm05 ceph-mon[58955]: pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:13.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:13 vm05 ceph-mon[51512]: pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:13 vm09 ceph-mon[53367]: pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:15 vm09 ceph-mon[53367]: pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:15 vm05 ceph-mon[58955]: pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:16.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:15 vm05 ceph-mon[51512]: pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:17 vm09 ceph-mon[53367]: pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:17 vm05 
ceph-mon[58955]: pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:18.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:17 vm05 ceph-mon[51512]: pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:19.423 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:55:18 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:55:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:19 vm05 ceph-mon[58955]: pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:20.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:19 vm05 ceph-mon[51512]: pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:20.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:20.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:55:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:55:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:55:20.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:19 vm09 ceph-mon[53367]: pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:20.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:21 vm05 ceph-mon[58955]: pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:21 vm05 ceph-mon[51512]: pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:22.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:21 vm09 ceph-mon[53367]: pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:23 vm05 ceph-mon[58955]: pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:55:24.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:23 vm05 ceph-mon[51512]: pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:24.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:23 vm05 
ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:55:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:23 vm09 ceph-mon[53367]: pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:24.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:55:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:25 vm09 ceph-mon[53367]: pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:26.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:25 vm05 ceph-mon[58955]: pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:26.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:25 vm05 ceph-mon[51512]: pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:27 vm09 ceph-mon[53367]: pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:27 vm05 ceph-mon[58955]: pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:28.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:27 vm05 ceph-mon[51512]: pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:29.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:55:28 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:55:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:29 vm09 ceph-mon[53367]: pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:29 vm05 ceph-mon[58955]: pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:29 vm05 ceph-mon[51512]: pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:55:29 vm05 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:55:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:55:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:31 vm09 ceph-mon[53367]: pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:32.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:31 vm05 ceph-mon[58955]: pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:32.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:31 vm05 ceph-mon[51512]: pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:33 vm09 ceph-mon[53367]: pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:34.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:33 vm05 ceph-mon[58955]: pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:34.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:33 vm05 ceph-mon[51512]: pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:36.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:35 vm09 ceph-mon[53367]: pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:36.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:35 vm05 ceph-mon[58955]: pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:36.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:35 vm05 ceph-mon[51512]: pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:38.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:37 vm09 ceph-mon[53367]: pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:37 vm05 ceph-mon[58955]: pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:38.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:37 vm05 ceph-mon[51512]: pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:55:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:55:39.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:55:39.423 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:55:38 vm09 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:55:40.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:39 vm05 ceph-mon[51512]: pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:40.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:55:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:55:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:55:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:39 vm05 ceph-mon[58955]: pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:39 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:39 vm09 ceph-mon[53367]: pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:42.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:41 vm05 ceph-mon[51512]: pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:42.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:41 vm05 ceph-mon[58955]: pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:42.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:41 vm09 ceph-mon[53367]: pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:44.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:43 vm05 ceph-mon[51512]: pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:44.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:43 vm05 ceph-mon[58955]: pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:44.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:43 vm09 ceph-mon[53367]: pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:45.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:44 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:55:45.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:44 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:55:45.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:44 vm09 
ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:55:46.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:45 vm05 ceph-mon[51512]: pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:46.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:45 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:55:46.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:45 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:55:46.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:45 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:55:46.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:45 vm05 ceph-mon[58955]: pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:46.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:45 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:55:46.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:45 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:55:46.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:45 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:55:46.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:45 vm09 ceph-mon[53367]: pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:46.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:45 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:55:46.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:45 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:55:46.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:45 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:55:48.225 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:47 vm09 ceph-mon[53367]: pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:48.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:47 vm05 ceph-mon[58955]: pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:48.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:47 vm05 ceph-mon[51512]: pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:49.423 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:55:48 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:55:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:49 vm05 ceph-mon[51512]: pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB 
used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:50.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:50.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:55:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:55:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:55:50.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:49 vm05 ceph-mon[58955]: pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:50.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:50.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:49 vm09 ceph-mon[53367]: pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:50.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:55:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:51 vm05 ceph-mon[58955]: pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:52.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:51 vm05 ceph-mon[51512]: pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:51 vm09 ceph-mon[53367]: pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:53 vm05 ceph-mon[58955]: pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:55:54.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:53 vm05 ceph-mon[51512]: pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:54.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:55:54.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:53 vm09 ceph-mon[53367]: pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:54.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:55:56.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:55 vm05 ceph-mon[58955]: pgmap v1212: 228 
pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:56.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:55 vm05 ceph-mon[51512]: pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:55 vm09 ceph-mon[53367]: pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:55:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:55:57 vm05 ceph-mon[58955]: pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:58.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:55:57 vm05 ceph-mon[51512]: pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:58.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:55:57 vm09 ceph-mon[53367]: pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:55:59.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:55:59 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:56:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:00 vm05 ceph-mon[58955]: pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:00 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:00 vm05 ceph-mon[51512]: pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:00 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:55:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:55:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:56:00.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:00 vm09 ceph-mon[53367]: pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:00.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:00 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:01.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:01 vm09 ceph-mon[53367]: pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:01.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:01 vm05 ceph-mon[58955]: pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:01.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:01 vm05 ceph-mon[51512]: pgmap v1215: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:03.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:03 vm05 ceph-mon[58955]: pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:03.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:03 vm05 ceph-mon[51512]: pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:03.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:03 vm09 ceph-mon[53367]: pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:05.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:05 vm05 ceph-mon[51512]: pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:05.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:05 vm05 ceph-mon[58955]: pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:05 vm09 ceph-mon[53367]: pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:07.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:07 vm05 ceph-mon[51512]: pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:07.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:07 vm05 ceph-mon[58955]: pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:07.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:07 vm09 ceph-mon[53367]: pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:08.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:56:08.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:56:08.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:56:09.322 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:56:09 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:56:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:09 vm05 ceph-mon[51512]: pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:09 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:09.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:09 vm05 ceph-mon[58955]: pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-10T13:56:09.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:09 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:09.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:09 vm09 ceph-mon[53367]: pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:09.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:09 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:56:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:56:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:56:11.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:11 vm05 ceph-mon[51512]: pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:11.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:11 vm05 ceph-mon[58955]: pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:11.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:11 vm09 ceph-mon[53367]: pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:13.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:13 vm09 ceph-mon[53367]: pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:13.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:13 vm05 ceph-mon[51512]: pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:13 vm05 ceph-mon[58955]: pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:15.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:15 vm09 ceph-mon[53367]: pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:15.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:15 vm05 ceph-mon[51512]: pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:15.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:15 vm05 ceph-mon[58955]: pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:17.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:17 vm09 ceph-mon[53367]: pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:17.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:17 vm05 ceph-mon[51512]: pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:17.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:17 vm05 ceph-mon[58955]: pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T13:56:19.383 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:56:19 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:56:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:19 vm09 ceph-mon[53367]: pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:19 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:19 vm05 ceph-mon[51512]: pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:19 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:19 vm05 ceph-mon[58955]: pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:19 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:56:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:56:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:56:21.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:21 vm09 ceph-mon[53367]: pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:21 vm05 ceph-mon[58955]: pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:21.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:21 vm05 ceph-mon[51512]: pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:23.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:23 vm09 ceph-mon[53367]: pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:23.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:56:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:23 vm05 ceph-mon[58955]: pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:56:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:23 vm05 ceph-mon[51512]: pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 
B/s rd, 0 op/s 2026-03-10T13:56:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:56:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:25 vm09 ceph-mon[53367]: pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:25.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:25 vm05 ceph-mon[58955]: pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:25.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:25 vm05 ceph-mon[51512]: pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:27.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:27 vm09 ceph-mon[53367]: pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:27.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:27 vm05 ceph-mon[58955]: pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:27.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:27 vm05 ceph-mon[51512]: pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:29.407 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:56:29 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:56:29.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:29 vm09 ceph-mon[53367]: pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:29.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:29 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:29.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:29 vm05 ceph-mon[58955]: pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:29.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:29 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:29.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:29 vm05 ceph-mon[51512]: pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:29.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:29 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:56:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:56:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:56:31.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:31 vm09 ceph-mon[53367]: pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 
KiB/s rd, 1 op/s 2026-03-10T13:56:31.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:31 vm05 ceph-mon[58955]: pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:31.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:31 vm05 ceph-mon[51512]: pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:33.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:33 vm09 ceph-mon[53367]: pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:33.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:33 vm05 ceph-mon[58955]: pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:33.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:33 vm05 ceph-mon[51512]: pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:35.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:35 vm09 ceph-mon[53367]: pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:35 vm05 ceph-mon[58955]: pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:35 vm05 ceph-mon[51512]: pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:37 vm05 ceph-mon[58955]: pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:37 vm05 ceph-mon[51512]: pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:37.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:37 vm09 ceph-mon[53367]: pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:38.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:56:38.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:56:38.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:56:39.423 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:56:39 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:56:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:39 vm05 ceph-mon[58955]: pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:39 
vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:39 vm05 ceph-mon[51512]: pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:39 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:39 vm09 ceph-mon[53367]: pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:39 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:56:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:56:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:56:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:41 vm05 ceph-mon[58955]: pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:41.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:41 vm05 ceph-mon[51512]: pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:41.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:41 vm09 ceph-mon[53367]: pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:43 vm05 ceph-mon[58955]: pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:43 vm05 ceph-mon[51512]: pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:43 vm09 ceph-mon[53367]: pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[58955]: pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:56:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:56:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:56:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:56:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:45 vm05 
ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:56:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:56:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:56:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:56:45.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[51512]: pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:45.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:56:45.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:56:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:56:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:56:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:56:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:56:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:56:45.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:45 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:56:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:45 vm09 ceph-mon[53367]: pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:45 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:56:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:45 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:56:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:45 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:56:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:45 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:56:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:45 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:56:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:45 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:56:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:45 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:56:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:45 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:56:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:47 vm05 ceph-mon[58955]: pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:47 vm05 ceph-mon[51512]: pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:47 vm09 ceph-mon[53367]: pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:49.423 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:56:49 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:56:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:49 vm05 ceph-mon[58955]: pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:49 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:49.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:49 vm05 ceph-mon[51512]: pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:49.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:49 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:49 vm09 ceph-mon[53367]: pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:49 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:56:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:56:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:56:51.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:51 vm05 ceph-mon[58955]: pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:51.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:51 vm05 ceph-mon[51512]: pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:51 vm09 ceph-mon[53367]: pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:53 vm05 ceph-mon[58955]: pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T13:56:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:56:53.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:53 vm05 ceph-mon[51512]: pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:53.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:56:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:53 vm09 ceph-mon[53367]: pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:56:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:55 vm05 ceph-mon[58955]: pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:55.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:55 vm05 ceph-mon[51512]: pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:55 vm09 ceph-mon[53367]: pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:56:57.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:57 vm05 ceph-mon[58955]: pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:57 vm05 ceph-mon[51512]: pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:57 vm09 ceph-mon[53367]: pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:59.423 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:56:59 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:56:59.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:59 vm05 ceph-mon[58955]: pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:59.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:56:59 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:59.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:59 vm05 ceph-mon[51512]: pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:59.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:56:59 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:56:59.923 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:59 vm09 ceph-mon[53367]: pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:56:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:56:59 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:56:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:56:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:57:01.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:01 vm05 ceph-mon[58955]: pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:01.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:01 vm05 ceph-mon[51512]: pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:01 vm09 ceph-mon[53367]: pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:03.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:03 vm05 ceph-mon[58955]: pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:03.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:03 vm05 ceph-mon[51512]: pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:03.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:03 vm09 ceph-mon[53367]: pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:05.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:05 vm05 ceph-mon[58955]: pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:05.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:05 vm05 ceph-mon[51512]: pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:05.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:05 vm09 ceph-mon[53367]: pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:07 vm05 ceph-mon[58955]: pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:07 vm05 ceph-mon[51512]: pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:07.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:07 vm09 ceph-mon[53367]: pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:08.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:57:08.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 
13:57:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:57:08.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:57:09.423 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:57:09 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:57:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:09 vm05 ceph-mon[58955]: pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:09.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:09 vm05 ceph-mon[51512]: pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:09 vm09 ceph-mon[53367]: pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:57:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:57:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:57:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:10 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:10 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:11.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:10 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:11 vm09 ceph-mon[53367]: pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:12.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:11 vm05 ceph-mon[58955]: pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:12.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:11 vm05 ceph-mon[51512]: pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:13.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:13 vm09 ceph-mon[53367]: pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:14.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:13 vm05 ceph-mon[58955]: pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:14.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:13 vm05 ceph-mon[51512]: pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:15.924 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:15 vm09 ceph-mon[53367]: pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:15 vm05 ceph-mon[58955]: pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:15 vm05 ceph-mon[51512]: pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:17.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:17 vm09 ceph-mon[53367]: pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:17 vm05 ceph-mon[58955]: pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:18.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:17 vm05 ceph-mon[51512]: pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:19.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:57:19 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:57:19.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:19 vm09 ceph-mon[53367]: pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:19.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:19 vm05 ceph-mon[58955]: pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:19.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:19 vm05 ceph-mon[51512]: pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:57:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:57:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:57:20.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:20 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:20 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:21.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:20 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:21.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:21 vm09 ceph-mon[53367]: pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:21 vm05 ceph-mon[58955]: pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:22.081 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:21 vm05 ceph-mon[51512]: pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:23 vm09 ceph-mon[53367]: pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:23.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:57:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:23 vm05 ceph-mon[58955]: pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:57:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:23 vm05 ceph-mon[51512]: pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:57:25.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:25 vm09 ceph-mon[53367]: pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:25 vm05 ceph-mon[58955]: pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:26.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:25 vm05 ceph-mon[51512]: pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:27 vm05 ceph-mon[58955]: pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:27 vm05 ceph-mon[51512]: pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:27 vm09 ceph-mon[53367]: pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:29.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:57:29 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:57:29.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:29 vm05 ceph-mon[58955]: pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:29.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:29 vm05 ceph-mon[51512]: pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:29 vm09 ceph-mon[53367]: pgmap v1259: 228 pgs: 
228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:30.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:57:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:57:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:57:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:30 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:30 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:31.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:30 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:31 vm05 ceph-mon[51512]: pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:31 vm05 ceph-mon[58955]: pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:31 vm09 ceph-mon[53367]: pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:33 vm05 ceph-mon[58955]: pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:34.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:33 vm05 ceph-mon[51512]: pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:33 vm09 ceph-mon[53367]: pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:35 vm05 ceph-mon[58955]: pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:35 vm05 ceph-mon[51512]: pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:36.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:35 vm09 ceph-mon[53367]: pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:38.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:37 vm05 ceph-mon[51512]: pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:38.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:37 vm05 ceph-mon[58955]: pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:38.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:37 vm09 ceph-mon[53367]: pgmap v1263: 228 pgs: 
228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:39.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:57:39.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:57:39.085 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:57:39.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:57:39 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:57:40.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:39 vm05 ceph-mon[58955]: pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:40.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:39 vm05 ceph-mon[51512]: pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:40.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:57:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:57:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:57:40.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:39 vm09 ceph-mon[53367]: pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:41.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:40 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:41.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:40 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:41.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:40 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:41 vm05 ceph-mon[58955]: pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:42.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:41 vm05 ceph-mon[51512]: pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:42.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:41 vm09 ceph-mon[53367]: pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:44.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:43 vm05 ceph-mon[58955]: pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:44.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:43 vm05 
ceph-mon[51512]: pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:44.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:43 vm09 ceph-mon[53367]: pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:45 vm05 ceph-mon[58955]: pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:45 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:57:46.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:45 vm05 ceph-mon[51512]: pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:46.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:45 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:57:46.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:45 vm09 ceph-mon[53367]: pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:46.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:45 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:57:47.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:46 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:57:47.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:46 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:57:47.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:46 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:57:47.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:46 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:57:47.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:46 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:57:47.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:46 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:57:47.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:46 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:57:47.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:46 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:57:47.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:46 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:57:48.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:47 vm09 ceph-mon[53367]: pgmap v1268: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:48.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:47 vm05 ceph-mon[58955]: pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:48.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:47 vm05 ceph-mon[51512]: pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:49.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:57:49 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:57:50.133 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:49 vm05 ceph-mon[51512]: pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:50.133 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:49 vm05 ceph-mon[58955]: pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:50.133 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:57:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:57:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:57:50.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:49 vm09 ceph-mon[53367]: pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:51.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:50 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:50 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:51.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:50 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:57:52.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:51 vm09 ceph-mon[53367]: pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:51 vm05 ceph-mon[58955]: pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:52.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:51 vm05 ceph-mon[51512]: pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:54.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:53 vm09 ceph-mon[53367]: pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:54.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:57:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:53 vm05 ceph-mon[58955]: pgmap v1271: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:57:54.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:53 vm05 ceph-mon[51512]: pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:54.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:57:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:55 vm09 ceph-mon[53367]: pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:56.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:55 vm05 ceph-mon[58955]: pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:56.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:55 vm05 ceph-mon[51512]: pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:57:58.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:57 vm09 ceph-mon[53367]: pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:57 vm05 ceph-mon[58955]: pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:58.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:57 vm05 ceph-mon[51512]: pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:57:59.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:57:59 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:58:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:57:59 vm09 ceph-mon[53367]: pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s 2026-03-10T13:58:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:57:59 vm05 ceph-mon[58955]: pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s 2026-03-10T13:58:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:57:59 vm05 ceph-mon[51512]: pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s 2026-03-10T13:58:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:57:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:57:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:58:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:00 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:01.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:00 vm05 ceph-mon[58955]: from='client.24484 
v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:01.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:00 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:01 vm09 ceph-mon[53367]: pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T13:58:02.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:01 vm05 ceph-mon[58955]: pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T13:58:02.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:01 vm05 ceph-mon[51512]: pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T13:58:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:03 vm05 ceph-mon[58955]: pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T13:58:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:03 vm05 ceph-mon[51512]: pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T13:58:04.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:03 vm09 ceph-mon[53367]: pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T13:58:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:06 vm05 ceph-mon[58955]: pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T13:58:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:06 vm05 ceph-mon[51512]: pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T13:58:06.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:06 vm09 ceph-mon[53367]: pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T13:58:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:08 vm05 ceph-mon[58955]: pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T13:58:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:08 vm05 ceph-mon[51512]: pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T13:58:08.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:08 vm09 ceph-mon[53367]: pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T13:58:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:09 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:58:09.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:09 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-10T13:58:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:09 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:58:09.423 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:58:09 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:58:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:10 vm05 ceph-mon[58955]: pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T13:58:10.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:10 vm05 ceph-mon[51512]: pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T13:58:10.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:58:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:58:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:58:10.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:10 vm09 ceph-mon[53367]: pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T13:58:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:11 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:11.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:11 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:11.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:11 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:12 vm05 ceph-mon[58955]: pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 28 KiB/s rd, 0 B/s wr, 45 op/s 2026-03-10T13:58:12.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:12 vm05 ceph-mon[51512]: pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 28 KiB/s rd, 0 B/s wr, 45 op/s 2026-03-10T13:58:12.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:12 vm09 ceph-mon[53367]: pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 28 KiB/s rd, 0 B/s wr, 45 op/s 2026-03-10T13:58:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:14 vm05 ceph-mon[58955]: pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:14.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:14 vm05 ceph-mon[51512]: pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:14.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:14 vm09 ceph-mon[53367]: pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:15.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:15 vm09 ceph-mon[53367]: pgmap v1282: 228 pgs: 228 active+clean; 455 KiB 
data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:15.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:15 vm05 ceph-mon[58955]: pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:15.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:15 vm05 ceph-mon[51512]: pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:17 vm05 ceph-mon[58955]: pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:17.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:17 vm05 ceph-mon[51512]: pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:17.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:17 vm09 ceph-mon[53367]: pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:19 vm09 ceph-mon[53367]: pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:19.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:58:19 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:58:19.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:19 vm05 ceph-mon[58955]: pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:19.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:19 vm05 ceph-mon[51512]: pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:20.330 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:58:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:58:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:58:20.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:20 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:20.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:20 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:20.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:20 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:21.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:21 vm09 ceph-mon[53367]: pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:21.674 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:58:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=cleanup t=2026-03-10T13:58:21.515887903Z level=info msg="Completed cleanup jobs" duration=1.485802ms 2026-03-10T13:58:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:21 vm05 ceph-mon[58955]: pgmap v1285: 228 pgs: 
228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:21.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:21 vm05 ceph-mon[51512]: pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:22.174 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 13:58:21 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=plugins.update.checker t=2026-03-10T13:58:21.691748934Z level=info msg="Update check succeeded" duration=53.996054ms 2026-03-10T13:58:23.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:23 vm09 ceph-mon[53367]: pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:23.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:58:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:23 vm05 ceph-mon[58955]: pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:58:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:23 vm05 ceph-mon[51512]: pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:23.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:58:25.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:25 vm09 ceph-mon[53367]: pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:25.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:25 vm05 ceph-mon[58955]: pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:25.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:25 vm05 ceph-mon[51512]: pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:27.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:27 vm09 ceph-mon[53367]: pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:27.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:27 vm05 ceph-mon[58955]: pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:27.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:27 vm05 ceph-mon[51512]: pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:29.407 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:58:29 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:58:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:29 vm09 ceph-mon[53367]: pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.1 
GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:29.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:29 vm05 ceph-mon[58955]: pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:29.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:29 vm05 ceph-mon[51512]: pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:30.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:58:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:58:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:58:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:30 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:30.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:30 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:30.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:30 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:31.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:31 vm05 ceph-mon[58955]: pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:31.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:31 vm05 ceph-mon[51512]: pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:31.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:31 vm09 ceph-mon[53367]: pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:33.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:33 vm05 ceph-mon[58955]: pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:33.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:33 vm05 ceph-mon[51512]: pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:33.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:33 vm09 ceph-mon[53367]: pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:35 vm05 ceph-mon[58955]: pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:35.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:35 vm05 ceph-mon[51512]: pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:35.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:35 vm09 ceph-mon[53367]: pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:37 vm05 ceph-mon[58955]: pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.1 
GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:37 vm05 ceph-mon[51512]: pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:37.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:37 vm09 ceph-mon[53367]: pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:38.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:58:38.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:58:38.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:38 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:58:39.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:58:39 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:58:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:39 vm05 ceph-mon[58955]: pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:39.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:39 vm05 ceph-mon[51512]: pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:39 vm09 ceph-mon[53367]: pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:40.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:58:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:58:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:58:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:40 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:40.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:40 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:40.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:40 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:41 vm05 ceph-mon[58955]: pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:41 vm05 ceph-mon[51512]: pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:41 vm09 ceph-mon[53367]: pgmap v1295: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:43.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:43 vm09 ceph-mon[53367]: pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:44.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:43 vm05 ceph-mon[58955]: pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:44.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:43 vm05 ceph-mon[51512]: pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:45.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:45 vm09 ceph-mon[53367]: pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:46.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:45 vm05 ceph-mon[51512]: pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:46.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:45 vm05 ceph-mon[58955]: pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:46 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:58:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:46 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:58:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:46 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:58:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:46 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:58:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:46 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:58:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:46 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:58:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:46 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:58:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:46 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:58:47.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:58:47.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' 
cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:58:47.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:58:47.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:58:47.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:58:47.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:58:47.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:58:47.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:58:47.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:58:47.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:58:47.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:58:47.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:58:47.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:58:47.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:58:47.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:58:47.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:46 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:58:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:47 vm09 ceph-mon[53367]: pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:47 vm05 ceph-mon[58955]: pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB 
used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:48.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:47 vm05 ceph-mon[51512]: pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:49.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:58:49 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:58:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:49 vm09 ceph-mon[53367]: pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:49.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:49 vm05 ceph-mon[58955]: pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:49.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:49 vm05 ceph-mon[51512]: pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:58:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:58:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:58:50.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:50 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:50 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:50 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:58:51.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:51 vm09 ceph-mon[53367]: pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:51 vm05 ceph-mon[58955]: pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:52.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:51 vm05 ceph-mon[51512]: pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:53.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:53 vm09 ceph-mon[53367]: pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:53.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:53 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:58:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:53 vm05 ceph-mon[58955]: pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:54.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:53 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:58:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:53 vm05 ceph-mon[51512]: pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:53 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:58:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:55 vm09 ceph-mon[53367]: pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:56.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:55 vm05 ceph-mon[58955]: pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:55 vm05 ceph-mon[51512]: pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:58:57.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:57 vm09 ceph-mon[53367]: pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:58.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:57 vm05 ceph-mon[58955]: pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:58.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:57 vm05 ceph-mon[51512]: pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:59.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:58:59 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:58:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:58:59 vm09 ceph-mon[53367]: pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:59.981 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:58:59 vm05 ceph-mon[58955]: pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:58:59.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:58:59 vm05 ceph-mon[51512]: pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:00.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:58:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:58:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:59:00.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:00 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:00 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:00 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:01.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:01 vm09 ceph-mon[53367]: pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:01 vm05 ceph-mon[58955]: pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:01 vm05 ceph-mon[51512]: pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:04.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:03 vm05 ceph-mon[58955]: pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:04.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:03 vm05 ceph-mon[51512]: pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:03 vm09 ceph-mon[53367]: pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:06.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:05 vm05 ceph-mon[58955]: pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:06.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:05 vm05 ceph-mon[51512]: pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:06.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:05 vm09 ceph-mon[53367]: pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:08.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:07 vm05 ceph-mon[58955]: pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:08.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:07 vm05 ceph-mon[51512]: pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:07 vm09 ceph-mon[53367]: pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:09.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:59:09.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:59:09.167 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:59:09.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:59:09 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:59:09.980 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:09 vm05 ceph-mon[58955]: pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:09.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:09 vm05 ceph-mon[51512]: pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:09 vm09 ceph-mon[53367]: pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:10.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:59:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:59:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:59:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:10 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:11.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:10 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:10 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:12.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:11 vm05 ceph-mon[58955]: pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:12.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:11 vm05 ceph-mon[51512]: pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:11 vm09 ceph-mon[53367]: pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:14.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:13 vm05 ceph-mon[58955]: pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:14.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:13 vm05 ceph-mon[51512]: pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:14.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:13 vm09 ceph-mon[53367]: pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:15 vm05 ceph-mon[58955]: pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:16.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:15 vm05 ceph-mon[51512]: pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:16.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:15 vm09 ceph-mon[53367]: pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:18.081 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:17 vm05 ceph-mon[58955]: pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:18.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:17 vm05 ceph-mon[51512]: pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:18.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:17 vm09 ceph-mon[53367]: pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:19.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:59:19 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:59:20.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:19 vm05 ceph-mon[58955]: pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:20.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:19 vm05 ceph-mon[51512]: pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:20.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:59:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:59:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:59:20.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:19 vm09 ceph-mon[53367]: pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:20 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:21.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:20 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:20 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:21 vm05 ceph-mon[58955]: pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:21 vm05 ceph-mon[51512]: pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:22.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:21 vm09 ceph-mon[53367]: pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:23 vm05 ceph-mon[58955]: pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:59:24.082 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:23 vm05 ceph-mon[51512]: pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:59:24.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:23 vm09 ceph-mon[53367]: pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:24.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:59:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:25 vm05 ceph-mon[58955]: pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:26.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:25 vm05 ceph-mon[51512]: pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:25 vm09 ceph-mon[53367]: pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:27 vm05 ceph-mon[58955]: pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:27 vm05 ceph-mon[51512]: pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:27 vm09 ceph-mon[53367]: pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:29.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:59:29 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:59:30.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:29 vm05 ceph-mon[58955]: pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:29 vm05 ceph-mon[51512]: pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:30.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:59:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:59:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:59:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:29 vm09 ceph-mon[53367]: pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:30 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:31.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 
13:59:30 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:30 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:31 vm05 ceph-mon[58955]: pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:31 vm05 ceph-mon[51512]: pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:31 vm09 ceph-mon[53367]: pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:33 vm05 ceph-mon[58955]: pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:34.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:33 vm05 ceph-mon[51512]: pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:33 vm09 ceph-mon[53367]: pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:35 vm05 ceph-mon[58955]: pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:35 vm05 ceph-mon[51512]: pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:36.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:35 vm09 ceph-mon[53367]: pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:38.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:37 vm05 ceph-mon[58955]: pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:38.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:37 vm05 ceph-mon[51512]: pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:38.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:37 vm09 ceph-mon[53367]: pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:39.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:59:39.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:59:39.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:38 vm09 ceph-mon[53367]: 
from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:59:39.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:59:39 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:59:40.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:39 vm05 ceph-mon[58955]: pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:39 vm05 ceph-mon[51512]: pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:40.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:59:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:59:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:59:40.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:39 vm09 ceph-mon[53367]: pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:41.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:40 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:41.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:40 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:41.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:40 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:41 vm09 ceph-mon[53367]: pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:42.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:41 vm05 ceph-mon[58955]: pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:42.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:41 vm05 ceph-mon[51512]: pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:44.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:43 vm09 ceph-mon[53367]: pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:44.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:43 vm05 ceph-mon[58955]: pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:44.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:43 vm05 ceph-mon[51512]: pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:46.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:45 vm09 ceph-mon[53367]: pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:45 vm05 ceph-mon[51512]: 
pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:45 vm05 ceph-mon[58955]: pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:46.881 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:46 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:59:46.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:46 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:59:47.166 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:46 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:59:47.939 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:47 vm09 ceph-mon[53367]: pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:47.939 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:47 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:59:47.939 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:47 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:59:47.939 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:47 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:59:47.939 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:47 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:59:48.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:47 vm05 ceph-mon[51512]: pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:48.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:47 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:59:48.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:47 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:59:48.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:47 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:59:48.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:47 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:59:48.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:47 vm05 ceph-mon[58955]: pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:48.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:47 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:59:48.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:47 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:59:48.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:47 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:59:48.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:47 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:59:49.205 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:48 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:59:49.206 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:48 
vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:59:49.206 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:48 vm09 ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:59:49.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:48 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:59:49.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:48 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:59:49.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:48 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:59:49.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:48 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:59:49.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:48 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:59:49.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:48 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T13:59:49.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:59:49 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:59:50.233 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:50 vm05 ceph-mon[51512]: pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:50.233 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:50 vm05 ceph-mon[58955]: pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:50.233 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:59:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:59:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T13:59:50.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:50 vm09 ceph-mon[53367]: pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:51.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:51 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:51 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:51.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:51 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T13:59:52.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:52 vm05 ceph-mon[51512]: pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:52.332 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:52 vm05 ceph-mon[58955]: pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:52 vm09 ceph-mon[53367]: pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:54.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:54 vm05 ceph-mon[51512]: pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:54.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:54 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:59:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:54 vm05 ceph-mon[58955]: pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:54 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:59:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:54 vm09 ceph-mon[53367]: pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:54 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:59:56.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:56 vm05 ceph-mon[51512]: pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:56.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:56 vm05 ceph-mon[58955]: pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:56.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:56 vm09 ceph-mon[53367]: pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T13:59:58.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:58 vm09 ceph-mon[53367]: pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:58.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:58 vm05 ceph-mon[51512]: pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:58.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:58 vm05 ceph-mon[58955]: pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:59.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 13:59:59 vm09 ceph-mon[53367]: pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:59.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 13:59:59 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T13:59:59.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 13:59:59 vm05 ceph-mon[51512]: pgmap v1334: 228 
pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T13:59:59.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 13:59:59 vm05 ceph-mon[58955]: pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:00 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:00 vm05 ceph-mon[58955]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-10T14:00:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:00 vm05 ceph-mon[58955]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-10T14:00:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:00 vm05 ceph-mon[58955]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T14:00:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:00 vm05 ceph-mon[58955]: application not enabled on pool 'WatchNotifyvm05-92449-1' 2026-03-10T14:00:00.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:00 vm05 ceph-mon[58955]: application not enabled on pool 'AssertExistsvm05-92484-1' 2026-03-10T14:00:00.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:00 vm05 ceph-mon[58955]: use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T14:00:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:00 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:00 vm05 ceph-mon[51512]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-10T14:00:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:00 vm05 ceph-mon[51512]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-10T14:00:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:00 vm05 ceph-mon[51512]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T14:00:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:00 vm05 ceph-mon[51512]: application not enabled on pool 'WatchNotifyvm05-92449-1' 2026-03-10T14:00:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:00 vm05 ceph-mon[51512]: application not enabled on pool 'AssertExistsvm05-92484-1' 2026-03-10T14:00:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:00 vm05 ceph-mon[51512]: use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
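The POOL_APP_NOT_ENABLED warning the monitors repeat above is raised because the pools named in it ('ceph_test_rados_api_asio', 'WatchNotifyvm05-92449-1', 'AssertExistsvm05-92484-1') were created by the tests without an application tag. The command the monitor suggests clears it one pool at a time; a minimal sketch, assuming the 'rbd' tag were appropriate for the first pool (the pool/application pairing is illustrative, not taken from this run):

    ceph osd pool application enable ceph_test_rados_api_asio rbd   # tag the pool with an application
    ceph health detail                                              # POOL_APP_NOT_ENABLED should no longer list that pool
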
2026-03-10T14:00:00.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 13:59:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:13:59:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:00:00.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:00 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:00.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:00 vm09 ceph-mon[53367]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled 2026-03-10T14:00:00.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:00 vm09 ceph-mon[53367]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled 2026-03-10T14:00:00.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:00 vm09 ceph-mon[53367]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T14:00:00.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:00 vm09 ceph-mon[53367]: application not enabled on pool 'WatchNotifyvm05-92449-1' 2026-03-10T14:00:00.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:00 vm09 ceph-mon[53367]: application not enabled on pool 'AssertExistsvm05-92484-1' 2026-03-10T14:00:00.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:00 vm09 ceph-mon[53367]: use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T14:00:01.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:01 vm09 ceph-mon[53367]: pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:01.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:01 vm05 ceph-mon[58955]: pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:01.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:01 vm05 ceph-mon[51512]: pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:03.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:03 vm09 ceph-mon[53367]: pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:03.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:03 vm05 ceph-mon[58955]: pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:03.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:03 vm05 ceph-mon[51512]: pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:05.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:05 vm05 ceph-mon[58955]: pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:05.854 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:05 vm05 ceph-mon[51512]: pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:05.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:05 vm09 ceph-mon[53367]: pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:07 vm05
ceph-mon[58955]: pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:07 vm05 ceph-mon[51512]: pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:07.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:07 vm09 ceph-mon[53367]: pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:08.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:08 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:00:09.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:08 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:00:09.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:08 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:00:09.618 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:00:09 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:00:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:09 vm09 ceph-mon[53367]: pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:09.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:09 vm05 ceph-mon[58955]: pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:09.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:09 vm05 ceph-mon[51512]: pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:10.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:00:09 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:00:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:00:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:10 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:10 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:11.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:10 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:11.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:11 vm09 ceph-mon[53367]: pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:12.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:11 vm05 ceph-mon[58955]: pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:12.082 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:11 vm05 ceph-mon[51512]: pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:14.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:13 vm05 ceph-mon[58955]: pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:14.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:13 vm05 ceph-mon[51512]: pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:14.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:13 vm09 ceph-mon[53367]: pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:16.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:15 vm05 ceph-mon[51512]: pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:16.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:15 vm05 ceph-mon[58955]: pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:16.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:15 vm09 ceph-mon[53367]: pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:18.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:17 vm09 ceph-mon[53367]: pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:18.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:17 vm05 ceph-mon[51512]: pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:17 vm05 ceph-mon[58955]: pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:19.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:00:19 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:00:20.133 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:19 vm05 ceph-mon[58955]: pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:20.134 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:19 vm05 ceph-mon[51512]: pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:20.134 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:00:19 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:00:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:00:20.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:19 vm09 ceph-mon[53367]: pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:20 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:20 vm05 ceph-mon[58955]: 
from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:21.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:20 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:21 vm09 ceph-mon[53367]: pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:21 vm05 ceph-mon[58955]: pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:22.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:21 vm05 ceph-mon[51512]: pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:23 vm09 ceph-mon[53367]: pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:00:24.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:23 vm09 ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:00:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:23 vm05 ceph-mon[58955]: pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:00:24.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:23 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:00:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:23 vm05 ceph-mon[51512]: pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:00:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:23 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:00:26.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:25 vm05 ceph-mon[58955]: pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:26.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:25 vm05 ceph-mon[51512]: pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:26.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:25 vm09 ceph-mon[53367]: pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:27 vm05 ceph-mon[58955]: pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:00:28.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:27 vm05 ceph-mon[51512]: pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:00:28.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:27 vm09 ceph-mon[53367]: pgmap v1348: 228 
pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:00:29.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:00:29 vm09 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:00:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:29 vm05 ceph-mon[58955]: pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:00:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:29 vm05 ceph-mon[51512]: pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:00:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:00:29 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:00:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:00:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:29 vm09 ceph-mon[53367]: pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:00:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:30 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:31.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:30 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:31.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:30 vm09 ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:32.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:31 vm05 ceph-mon[58955]: pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:32.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:31 vm05 ceph-mon[51512]: pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:32.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:31 vm09 ceph-mon[53367]: pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:34.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:33 vm05 ceph-mon[58955]: pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:34.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:33 vm05 ceph-mon[51512]: pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:33 vm09 ceph-mon[53367]: pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:36.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:35 vm05 ceph-mon[58955]: pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:36.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:35 vm05 ceph-mon[51512]: pgmap v1352: 228 pgs: 228 
active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:36.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:35 vm09 ceph-mon[53367]: pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:37 vm05 ceph-mon[58955]: pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:38.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:37 vm05 ceph-mon[51512]: pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:38.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:37 vm09.local ceph-mon[53367]: pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:39.248 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:38 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:00:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:38 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:00:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:38 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:00:39.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:00:39 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:00:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:39 vm05 ceph-mon[58955]: pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:40.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:39 vm05 ceph-mon[51512]: pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:00:39 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:00:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:00:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:39 vm09.local ceph-mon[53367]: pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:41.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:40 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:41.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:40 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:41.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:40 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:42.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 
10 14:00:41 vm05 ceph-mon[58955]: pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:42.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:41 vm05 ceph-mon[51512]: pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:42.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:41 vm09.local ceph-mon[53367]: pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:44.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:44 vm05 ceph-mon[58955]: pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:44.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:44 vm05 ceph-mon[51512]: pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:44.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:44 vm09.local ceph-mon[53367]: pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:46.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:46 vm05 ceph-mon[58955]: pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:46.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:46 vm05 ceph-mon[51512]: pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:46.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:46 vm09.local ceph-mon[53367]: pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:48.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:48 vm05 ceph-mon[58955]: pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:48.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:48 vm05 ceph-mon[51512]: pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:48 vm09.local ceph-mon[53367]: pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:49.325 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:49 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:00:49.325 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:49 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:00:49.325 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:49 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:00:49.325 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:49 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:00:49.326 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:00:49 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner 
data available 2026-03-10T14:00:49.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:49 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:00:49.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:49 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:00:49.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:49 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:00:49.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:49 vm05 ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:00:49.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:49 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:00:49.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:49 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:00:49.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:49 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:00:49.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:49 vm05 ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:00:50.233 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:50 vm05 ceph-mon[58955]: pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:50.233 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:50 vm05 ceph-mon[51512]: pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:50.233 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:00:49 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:00:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:00:50.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:50 vm09.local ceph-mon[53367]: pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:51 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:51.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:51 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:51.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:51 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:00:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:52 vm09.local ceph-mon[53367]: pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:52.581 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:52 vm05 ceph-mon[58955]: pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:52.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:52 vm05 ceph-mon[51512]: pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:54 vm09.local ceph-mon[53367]: pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:54 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:00:54.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:54 vm05 ceph-mon[58955]: pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:54.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:54 vm05 ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:00:54.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:54 vm05 ceph-mon[51512]: pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:54.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:54 vm05 ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:00:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:56 vm09.local ceph-mon[53367]: pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:56.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:56 vm05 ceph-mon[58955]: pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:56.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:56 vm05 ceph-mon[51512]: pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:00:58.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:58 vm09.local ceph-mon[53367]: pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:58.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:58 vm05 ceph-mon[58955]: pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:58.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:58 vm05 ceph-mon[51512]: pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:59.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:00:59 vm09.local ceph-mon[53367]: pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:59.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:00:59 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:00:59.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:00:59 vm05 
ceph-mon[58955]: pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:00:59.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:00:59 vm05 ceph-mon[51512]: pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:00 vm05 ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:01:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:00 vm05 ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:01:00.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:00:59 vm05 ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:00:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:01:00.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:00 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:01:01.363 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:01 vm05 ceph-mon[58955]: pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:01.363 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:01 vm05 ceph-mon[51512]: pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:01.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:01 vm09.local ceph-mon[53367]: pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:03.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:03 vm09.local ceph-mon[53367]: pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:03.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:03 vm05 ceph-mon[58955]: pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:03.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:03 vm05 ceph-mon[51512]: pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:05.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:05 vm09.local ceph-mon[53367]: pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:05.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:05 vm05 ceph-mon[58955]: pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:05.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:05 vm05 ceph-mon[51512]: pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:07.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:07 vm09.local ceph-mon[53367]: pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:07.831 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:07 vm05 ceph-mon[58955]: pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:07 vm05 ceph-mon[51512]: pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:09.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:09 vm09.local ceph-mon[53367]: pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:09.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:09 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:01:09.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:01:09 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:01:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:09 vm05.local ceph-mon[58955]: pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:09 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:01:09.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:09 vm05.local ceph-mon[51512]: pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:09.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:09 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:01:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:01:09 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:01:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:01:10.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:10 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:01:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:10 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:01:10.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:10 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:01:11.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:11 vm09.local ceph-mon[53367]: pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:11 vm05.local ceph-mon[58955]: pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:11.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:11 vm05.local ceph-mon[51512]: pgmap v1370: 228 pgs: 
228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:13 vm05.local ceph-mon[58955]: pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:13.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:13 vm05.local ceph-mon[51512]: pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:13 vm09.local ceph-mon[53367]: pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:15 vm09.local ceph-mon[53367]: pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:15 vm05.local ceph-mon[58955]: pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:15 vm05.local ceph-mon[51512]: pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:17 vm09.local ceph-mon[53367]: pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:17 vm05.local ceph-mon[58955]: pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:18.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:17 vm05.local ceph-mon[51512]: pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:19.614 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:01:19 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:01:19.922 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:19 vm09.local ceph-mon[53367]: pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:19.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:19 vm05.local ceph-mon[58955]: pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:19.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:19 vm05.local ceph-mon[51512]: pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:01:19 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:01:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:01:20.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:20 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:01:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:20 vm05.local ceph-mon[58955]: 
from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:01:21.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:20 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:01:21.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:21 vm09.local ceph-mon[53367]: pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:22.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:21 vm05.local ceph-mon[58955]: pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:21 vm05.local ceph-mon[51512]: pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:23 vm05.local ceph-mon[58955]: pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:24.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:23 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:01:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:23 vm05.local ceph-mon[51512]: pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:23 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:01:24.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:23 vm09.local ceph-mon[53367]: pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:24.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:23 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:01:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:25 vm09.local ceph-mon[53367]: pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:26.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:25 vm05.local ceph-mon[58955]: pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:26.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:25 vm05.local ceph-mon[51512]: pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:28 vm05.local ceph-mon[58955]: pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T14:01:28.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:28 vm05.local ceph-mon[51512]: pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T14:01:28.424 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:28 vm09.local ceph-mon[53367]: pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T14:01:29.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:01:29 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:01:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:30 vm05.local ceph-mon[58955]: pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:30 vm05.local ceph-mon[51512]: pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:30.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:01:29 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:01:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:01:30.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:30 vm09.local ceph-mon[53367]: pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:01:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:31 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:01:31.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:31 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:01:31.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:31 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:01:32.257 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:32 vm05.local ceph-mon[58955]: pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:32.257 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:32 vm05.local ceph-mon[51512]: pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:32.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:32 vm09.local ceph-mon[53367]: pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:01:33.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:33 vm05.local ceph-mon[51512]: pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T14:01:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:33 vm05.local ceph-mon[58955]: pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T14:01:33.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:33 vm09.local ceph-mon[53367]: pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T14:01:35.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:35 vm09.local ceph-mon[53367]: pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.1 
GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:01:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:35 vm05.local ceph-mon[58955]: pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:01:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:35 vm05.local ceph-mon[51512]: pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:01:37.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:37 vm09.local ceph-mon[53367]: pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:37 vm05.local ceph-mon[58955]: pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:37.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:37 vm05.local ceph-mon[51512]: pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:39.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:01:39 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:01:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:39 vm09.local ceph-mon[53367]: pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:39 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:01:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:39 vm05.local ceph-mon[58955]: pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:39.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:39 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:01:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:39 vm05.local ceph-mon[51512]: pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:39.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:39 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:01:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:01:39 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:01:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:01:40.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:40 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:01:40.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:40 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:01:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:40 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:01:41.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:41 vm05.local ceph-mon[58955]: pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:01:41.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:41 vm05.local ceph-mon[51512]: pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:01:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:41 vm09.local ceph-mon[53367]: pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:01:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:43 vm05.local ceph-mon[58955]: pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:43.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:43 vm05.local ceph-mon[51512]: pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:43 vm09.local ceph-mon[53367]: pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:45 vm05.local ceph-mon[58955]: pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:01:45.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:45 vm05.local ceph-mon[51512]: pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:01:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:45 vm09.local ceph-mon[53367]: pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:01:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:47 vm05.local ceph-mon[58955]: pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:47 vm05.local ceph-mon[51512]: pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:47 vm09.local ceph-mon[53367]: pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:48.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:48 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:01:48.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:48 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:01:48.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:48 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:01:49.570 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:01:49 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:01:49.570 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:49 vm09.local ceph-mon[53367]: pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:49.570 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:49 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:01:49.570 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:49 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:01:49.570 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:49 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T14:01:49.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:49 vm05.local ceph-mon[58955]: pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:49.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:49 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:01:49.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:49 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:01:49.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:49 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T14:01:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:49 vm05.local ceph-mon[51512]: pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:49 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:01:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:49 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:01:49.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:49 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T14:01:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:01:49 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:01:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:01:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:50 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:01:51.061 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:50 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:01:51.061 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:50 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:01:52.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:51 vm05.local ceph-mon[58955]: pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:01:52.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:51 vm05.local ceph-mon[51512]: pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:01:52.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:51 vm09.local ceph-mon[53367]: pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:01:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:53 vm05.local ceph-mon[58955]: pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:53 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:01:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:53 vm05.local ceph-mon[51512]: pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:53 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:01:54.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:53 vm09.local ceph-mon[53367]: pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:54.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:53 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:01:56.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:55 vm05.local ceph-mon[58955]: pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:01:56.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:55 vm05.local ceph-mon[51512]: pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:01:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:55 vm09.local ceph-mon[53367]: pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:01:58.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:01:57 vm09.local ceph-mon[53367]: pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:58.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:01:57 vm05.local ceph-mon[51512]: pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:58.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:01:57 vm05.local ceph-mon[58955]: pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:01:59.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:01:59 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:02:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:00 vm05.local ceph-mon[58955]: pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:00.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:00 vm05.local ceph-mon[51512]: pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:00.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:01:59 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:01:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:02:00.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:00 vm09.local ceph-mon[53367]: pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:01.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:01 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:01 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:01.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:01 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:02.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:02 vm05.local ceph-mon[51512]: pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:02.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:02 vm05.local ceph-mon[58955]: pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:02.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:02 vm09.local ceph-mon[53367]: pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:04 vm05.local ceph-mon[51512]: pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:04 vm05.local ceph-mon[58955]: pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:04.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:04 vm09.local ceph-mon[53367]: pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:06 vm05.local ceph-mon[51512]: pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:06 vm05.local ceph-mon[58955]: pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:06.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:06 vm09.local ceph-mon[53367]: pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:08 vm05.local ceph-mon[51512]: pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:08 vm05.local ceph-mon[58955]: pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:08.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:08 vm09.local ceph-mon[53367]: pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:09.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:09 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:02:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:09 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:02:09.333 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:09 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:02:09.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:02:09 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:02:10.238 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:10 vm05.local ceph-mon[51512]: pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:10.238 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:02:09 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:02:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:02:10.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:10 vm05.local ceph-mon[58955]: pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:10.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:10 vm09.local ceph-mon[53367]: pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:11.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:11 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:11.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:11 vm05.local ceph-mon[51512]: pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:11.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:11 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:11.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:11 vm05.local ceph-mon[58955]: pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:11.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:11 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:11.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:11 vm09.local ceph-mon[53367]: pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:13.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:13 vm09.local ceph-mon[53367]: pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:13.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:13 vm05.local ceph-mon[51512]: pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:13 vm05.local ceph-mon[58955]: pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:15.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:15 vm09.local ceph-mon[53367]: pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:15.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:15 vm05.local ceph-mon[51512]: pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:15.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:15 vm05.local ceph-mon[58955]: pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:17.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:17 vm05.local ceph-mon[51512]: pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:17.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:17 vm05.local ceph-mon[58955]: pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:17 vm09.local ceph-mon[53367]: pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:19.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:19 vm09.local ceph-mon[53367]: pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:19.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:02:19 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:02:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:19 vm05.local ceph-mon[58955]: pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:19 vm05.local ceph-mon[51512]: pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:02:19 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:02:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:02:20.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:20 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:20.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:20 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:20.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:20 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:21 vm05.local ceph-mon[58955]: pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:21 vm05.local ceph-mon[51512]: pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:21.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:21 vm09.local ceph-mon[53367]: pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:23 vm05.local ceph-mon[58955]: pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:23 vm05.local ceph-mon[51512]: pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:23 vm09.local ceph-mon[53367]: pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:24 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:02:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:24 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:02:24.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:24 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:02:25.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:25 vm05.local ceph-mon[58955]: pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:25.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:25 vm05.local ceph-mon[51512]: pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:25.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:25 vm09.local ceph-mon[53367]: pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:27.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:27 vm05.local ceph-mon[58955]: pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:27.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:27 vm05.local ceph-mon[51512]: pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:27.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:27 vm09.local ceph-mon[53367]: pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:29.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:29 vm09.local ceph-mon[53367]: pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:29.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:02:29 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:02:29.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:29 vm05.local ceph-mon[58955]: pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:29.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:29 vm05.local ceph-mon[51512]: pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:02:29 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:02:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:02:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:31 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:31.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:31 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:31.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:31 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:32.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:32 vm05.local ceph-mon[58955]: pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:32.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:32 vm05.local ceph-mon[51512]: pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:32.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:32 vm09.local ceph-mon[53367]: pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:33.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:33 vm05.local ceph-mon[58955]: pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:33.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:33 vm05.local ceph-mon[51512]: pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:33.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:33 vm09.local ceph-mon[53367]: pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:35 vm05.local ceph-mon[58955]: pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:35.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:35 vm05.local ceph-mon[51512]: pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:35.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:35 vm09.local ceph-mon[53367]: pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:37.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:37 vm05.local ceph-mon[58955]: pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:37.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:37 vm05.local ceph-mon[51512]: pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:37.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:37 vm09.local ceph-mon[53367]: pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:39.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:38 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:02:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:38 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:02:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:38 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:02:39.673 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:02:39 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:02:40.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:39 vm09.local ceph-mon[53367]: pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:39 vm05.local ceph-mon[58955]: pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:40.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:39 vm05.local ceph-mon[51512]: pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:40.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:02:39 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:02:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:02:41.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:40 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:41.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:40 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:41.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:40 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:42.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:41 vm09.local ceph-mon[53367]: pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:42.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:41 vm05.local ceph-mon[58955]: pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:42.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:41 vm05.local ceph-mon[51512]: pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:44.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:43 vm09.local ceph-mon[53367]: pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:44.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:43 vm05.local ceph-mon[58955]: pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:44.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:43 vm05.local ceph-mon[51512]: pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:46.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:45 vm09.local ceph-mon[53367]: pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:46.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:45 vm05.local ceph-mon[58955]: pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:46.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:45 vm05.local ceph-mon[51512]: pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:48.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:47 vm05.local ceph-mon[58955]: pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:48.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:47 vm05.local ceph-mon[51512]: pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:47 vm09.local ceph-mon[53367]: pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:49.325 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:48 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:02:49.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:48 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:02:49.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:48 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:02:49.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:02:49 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:02:50.233 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:49 vm05.local ceph-mon[58955]: pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:50.233 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:49 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:02:50.233 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:49 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:02:50.233 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:49 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T14:02:50.233 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:49 vm05.local ceph-mon[51512]: pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:50.233 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:49 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:02:50.233 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:49 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:02:50.233 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:49 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T14:02:50.233 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:02:49 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:02:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:02:50.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:49 vm09.local ceph-mon[53367]: pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:50.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:49 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:02:50.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:49 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:02:50.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:49 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T14:02:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:50 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:51.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:50 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:51.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:50 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:02:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:51 vm05.local ceph-mon[58955]: pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:52.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:51 vm05.local ceph-mon[51512]: pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:51 vm09.local ceph-mon[53367]: pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:53 vm05.local ceph-mon[58955]: pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:53 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:02:54.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:53 vm05.local ceph-mon[51512]: pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:54.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:53 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:02:54.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:53 vm09.local ceph-mon[53367]: pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:54.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:53 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:02:56.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:55 vm05.local ceph-mon[58955]: pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:56.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:55 vm05.local ceph-mon[51512]: pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:56.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:55 vm09.local ceph-mon[53367]: pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:02:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:02:57 vm05.local ceph-mon[58955]: pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:58.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:02:57 vm05.local ceph-mon[51512]: pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:58.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:02:57 vm09.local ceph-mon[53367]: pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:02:59.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:02:59 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:03:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:00 vm05.local ceph-mon[58955]: pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:00 vm05.local ceph-mon[51512]: pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:02:59 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:02:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:03:00.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:00 vm09.local ceph-mon[53367]: pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:01.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:01 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:03:01.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:01 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:03:01.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:01 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:03:02.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:02 vm05.local ceph-mon[58955]: pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:02.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:02 vm05.local ceph-mon[51512]: pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:02.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:02 vm09.local ceph-mon[53367]: pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:04 vm05.local ceph-mon[58955]: pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:04 vm05.local ceph-mon[51512]: pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:04.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:04 vm09.local ceph-mon[53367]: pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:06 vm05.local ceph-mon[58955]: pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:06 vm05.local ceph-mon[51512]: pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:06.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:06 vm09.local ceph-mon[53367]: pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:08 vm05.local ceph-mon[58955]: pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:08 vm05.local ceph-mon[51512]: pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:08.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:08 vm09.local ceph-mon[53367]: pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:09 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:03:09.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:09 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:03:09.385 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:09 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:03:09.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:03:09 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:03:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:10 vm05.local ceph-mon[58955]: pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:10 vm05.local ceph-mon[51512]: pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:03:09 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:03:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:03:10.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:10 vm09.local ceph-mon[53367]: pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:11 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:03:11.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:11 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:03:11.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:11 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:03:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:12 vm05.local ceph-mon[58955]: pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:12.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:12 vm05.local ceph-mon[51512]: pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:12.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:12 vm09.local ceph-mon[53367]: pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:14 vm05.local ceph-mon[58955]: pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:14.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:14 vm05.local ceph-mon[51512]: pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:14.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:14 vm09.local ceph-mon[53367]: pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:16 vm09.local ceph-mon[53367]: pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:16.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:16 vm05.local ceph-mon[58955]: pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:16.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:16 vm05.local ceph-mon[51512]: pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:18.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:18 vm09.local ceph-mon[53367]: pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:18.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:18 vm05.local ceph-mon[58955]: pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:18.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:18 vm05.local ceph-mon[51512]: pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:19.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:03:19 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:03:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:20 vm05.local ceph-mon[51512]: pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:20 vm05.local ceph-mon[58955]: pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:03:19 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:03:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:03:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:20 vm09.local ceph-mon[53367]: pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:21.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:21 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:03:21.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:21 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:03:21.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:21 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:03:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:22 vm09.local ceph-mon[53367]: pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:22.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:22 vm05.local ceph-mon[51512]: pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:22.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:22 vm05.local ceph-mon[58955]: pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:24.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:24 vm09.local ceph-mon[53367]: pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:24.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:24 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:03:24.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:24 vm05.local ceph-mon[51512]: pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:24.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:24 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:03:24.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:24 vm05.local ceph-mon[58955]: pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:24.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:24 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:03:25.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:25 vm09.local ceph-mon[53367]: pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:25.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:25 vm05.local ceph-mon[51512]: pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:25 vm05.local ceph-mon[58955]: pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:27.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:27 vm05.local ceph-mon[51512]: pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:27.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:27 vm05.local ceph-mon[58955]: pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:27.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:27 vm09.local ceph-mon[53367]: pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:29.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:29 vm09.local ceph-mon[53367]: pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:29.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:03:29 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:03:29.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:29 vm05.local ceph-mon[51512]: pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:29.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:29 vm05.local ceph-mon[58955]: pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:03:29 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:03:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:03:30.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:30 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:03:30.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:30 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:03:30.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:30 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:03:31.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:31 vm05.local ceph-mon[51512]: pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:31.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:31 vm05.local ceph-mon[58955]: pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:31.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:31 vm09.local ceph-mon[53367]: pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:33.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:33 vm05.local ceph-mon[51512]: pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:33.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:33 vm05.local ceph-mon[58955]: pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:33.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:33 vm09.local ceph-mon[53367]: pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:35.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:35 vm05.local ceph-mon[51512]: pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:35.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:35 vm05.local ceph-mon[58955]: pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:35.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:35 vm09.local ceph-mon[53367]: pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:37 vm05.local ceph-mon[51512]: pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:37 vm05.local ceph-mon[58955]: pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:37.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:37 vm09.local ceph-mon[53367]: pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:38.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:38 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:03:39.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:38 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:03:39.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:38 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:03:39.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:39 vm09.local ceph-mon[53367]: pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:39.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:03:39 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:03:39.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:39 vm05.local ceph-mon[51512]: pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:39.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:39 vm05.local ceph-mon[58955]: pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:03:39 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:03:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:03:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:40 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:03:41.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:40 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:03:41.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:40 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:03:41.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:41 vm09.local ceph-mon[53367]: pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:42.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:41 vm05.local ceph-mon[51512]: pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:41 vm05.local ceph-mon[58955]: pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:43 vm09.local ceph-mon[53367]: pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:44.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:43 vm05.local ceph-mon[51512]: pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:44.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:43 vm05.local ceph-mon[58955]: pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:45 vm09.local ceph-mon[53367]: pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:46.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:45 vm05.local ceph-mon[51512]: pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:45 vm05.local ceph-mon[58955]: pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:03:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:47 vm09.local ceph-mon[53367]: pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:48.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:47 vm05.local ceph-mon[51512]: pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:47 vm05.local ceph-mon[58955]: pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:49.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:49 vm09.local ceph-mon[53367]: pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:49.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:49 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:03:49.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:49 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:03:49.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:49 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:03:49.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:49 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T14:03:49.674 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:03:49 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:03:49.981 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:49 vm05.local ceph-mon[58955]: pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:49.981 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:49 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:03:49.981 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:49 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:03:49.981 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:49 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:03:49.981 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:49 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y'
2026-03-10T14:03:49.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:49 vm05.local ceph-mon[51512]: pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:03:49.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:49 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:03:49.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:49 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:03:49.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:49 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:03:49.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:49 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:03:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:03:49 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:03:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:03:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:50 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:03:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:50 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:03:51.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:50 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:03:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:51 vm09.local ceph-mon[53367]: pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:03:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:51 vm05.local ceph-mon[58955]: pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:03:52.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:51 vm05.local ceph-mon[51512]: pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:03:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:53 vm09.local ceph-mon[53367]: pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:03:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:53 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:03:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:53 vm05.local ceph-mon[58955]: pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:03:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:53 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:03:54.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:53 vm05.local ceph-mon[51512]: pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:03:54.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:53 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:03:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:55 vm09.local ceph-mon[53367]: pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:03:56.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:55 vm05.local ceph-mon[58955]: pgmap v1452: 228 pgs: 228 active+clean; 455 KiB 
data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:03:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:55 vm05.local ceph-mon[51512]: pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:03:57.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:57 vm09.local ceph-mon[53367]: pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:03:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:57 vm05.local ceph-mon[58955]: pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:03:58.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:57 vm05.local ceph-mon[51512]: pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:03:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:03:59 vm09.local ceph-mon[53367]: pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:03:59.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:03:59 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:03:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:03:59 vm05.local ceph-mon[58955]: pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:03:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:03:59 vm05.local ceph-mon[51512]: pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:03:59 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:03:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:04:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:00 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:00 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:01.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:00 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:01.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:01 vm09.local ceph-mon[53367]: pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:01 vm05.local ceph-mon[58955]: pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:02.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:01 vm05.local ceph-mon[51512]: pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:04.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 
14:04:03 vm05.local ceph-mon[58955]: pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:04.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:03 vm05.local ceph-mon[51512]: pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:03 vm09.local ceph-mon[53367]: pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:06.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:05 vm05.local ceph-mon[58955]: pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:06.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:05 vm05.local ceph-mon[51512]: pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:05 vm09.local ceph-mon[53367]: pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:08.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:07 vm09.local ceph-mon[53367]: pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:07 vm05.local ceph-mon[58955]: pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:07 vm05.local ceph-mon[51512]: pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:08 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:04:09.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:08 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:04:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:08 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:04:09.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:04:09 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:04:09.981 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:09 vm05.local ceph-mon[58955]: pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:09.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:09 vm05.local ceph-mon[51512]: pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:09.985 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:04:09 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:04:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:04:10.423 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:09 vm09.local ceph-mon[53367]: pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:10 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:11.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:10 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:11.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:10 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:11 vm05.local ceph-mon[58955]: pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:12.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:11 vm05.local ceph-mon[51512]: pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:12.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:11 vm09.local ceph-mon[53367]: pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:14 vm05.local ceph-mon[58955]: pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:14.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:14 vm05.local ceph-mon[51512]: pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:14 vm09.local ceph-mon[53367]: pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:16 vm05.local ceph-mon[58955]: pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:16.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:16 vm05.local ceph-mon[51512]: pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:16 vm09.local ceph-mon[53367]: pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:18 vm05.local ceph-mon[58955]: pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:18.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:18 vm05.local ceph-mon[51512]: pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:18.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:18 vm09.local ceph-mon[53367]: pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB 
/ 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:19.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:04:19 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:04:20.287 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:20 vm05.local ceph-mon[51512]: pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:20.287 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:20 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:20.287 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:04:19 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:04:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:04:20.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:20 vm05.local ceph-mon[58955]: pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:20.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:20 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:20.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:20 vm09.local ceph-mon[53367]: pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:20.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:20 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:21.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:21 vm05.local ceph-mon[51512]: pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:21.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:21 vm05.local ceph-mon[58955]: pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:21.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:21 vm09.local ceph-mon[53367]: pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:23 vm05.local ceph-mon[51512]: pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:23 vm05.local ceph-mon[58955]: pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:23 vm09.local ceph-mon[53367]: pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:24.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:24 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:04:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:24 vm05.local 
ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:04:24.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:24 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:04:25.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:25 vm05.local ceph-mon[51512]: pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:25.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:25 vm05.local ceph-mon[58955]: pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:25.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:25 vm09.local ceph-mon[53367]: pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:27.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:27 vm05.local ceph-mon[51512]: pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:27.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:27 vm05.local ceph-mon[58955]: pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:27.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:27 vm09.local ceph-mon[53367]: pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:29.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:29 vm05.local ceph-mon[51512]: pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:29.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:29 vm05.local ceph-mon[58955]: pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:29.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:29 vm09.local ceph-mon[53367]: pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:29.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:04:29 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:04:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:04:29 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:04:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:04:30.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:30 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:30 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:30 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": 
"json"}]: dispatch 2026-03-10T14:04:31.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:31 vm09.local ceph-mon[53367]: pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:31 vm05.local ceph-mon[58955]: pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:31 vm05.local ceph-mon[51512]: pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:33.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:33 vm09.local ceph-mon[53367]: pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:33 vm05.local ceph-mon[58955]: pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:34.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:33 vm05.local ceph-mon[51512]: pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:35.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:35 vm09.local ceph-mon[53367]: pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:35 vm05.local ceph-mon[58955]: pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:36.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:35 vm05.local ceph-mon[51512]: pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:37.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:37 vm09.local ceph-mon[53367]: pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:38.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:37 vm05.local ceph-mon[58955]: pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:38.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:37 vm05.local ceph-mon[51512]: pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:38.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:38 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:04:39.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:38 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:04:39.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:38 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:04:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:39 vm09.local ceph-mon[53367]: pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:39.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:04:39 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:04:39.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:39 vm05.local ceph-mon[58955]: pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:39.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:39 vm05.local ceph-mon[51512]: pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:04:39 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:04:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:04:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:40 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:41.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:40 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:41.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:40 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:41 vm05.local ceph-mon[58955]: pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:41 vm05.local ceph-mon[51512]: pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:42.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:41 vm09.local ceph-mon[53367]: pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:44.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:43 vm05.local ceph-mon[58955]: pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:44.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:43 vm05.local ceph-mon[51512]: pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:44.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:43 vm09.local ceph-mon[53367]: pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:45 vm05.local ceph-mon[58955]: pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:46.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:45 vm05.local ceph-mon[51512]: pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:46.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:45 vm09.local 
ceph-mon[53367]: pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:47 vm05.local ceph-mon[58955]: pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:48.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:47 vm05.local ceph-mon[51512]: pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:48.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:47 vm09.local ceph-mon[53367]: pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:49.739 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:49 vm09.local ceph-mon[53367]: pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:49.739 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:49 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:04:49.739 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:49 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:04:49.739 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:49 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:04:49.739 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:49 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:04:49.739 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:04:49 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:04:50.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:49 vm05.local ceph-mon[58955]: pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:50.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:49 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:04:50.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:49 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:04:50.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:49 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:04:50.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:49 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:04:50.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:49 vm05.local ceph-mon[51512]: pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:50.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:49 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-10T14:04:50.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:49 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:04:50.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:49 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:04:50.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:49 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:04:50.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:04:49 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:04:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:04:51.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:50 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:50 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:51.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:50 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:04:52.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:51 vm09.local ceph-mon[53367]: pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:51 vm05.local ceph-mon[58955]: pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:52.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:51 vm05.local ceph-mon[51512]: pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:54.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:53 vm09.local ceph-mon[53367]: pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:54.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:53 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:04:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:53 vm05.local ceph-mon[58955]: pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:53 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:04:54.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:53 vm05.local ceph-mon[51512]: pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:54.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:53 vm05.local ceph-mon[51512]: from='mgr.14712 
v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:04:56.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:55 vm09.local ceph-mon[53367]: pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:56.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:55 vm05.local ceph-mon[58955]: pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:56.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:55 vm05.local ceph-mon[51512]: pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:04:58.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:57 vm09.local ceph-mon[53367]: pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:57 vm05.local ceph-mon[58955]: pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:58.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:57 vm05.local ceph-mon[51512]: pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:04:59.871 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:04:59 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:05:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:04:59 vm09.local ceph-mon[53367]: pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:04:59 vm05.local ceph-mon[58955]: pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:04:59 vm05.local ceph-mon[51512]: pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:00.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:04:59 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:04:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:05:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:00 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:01.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:00 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:01.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:00 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:02.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:01 vm09.local ceph-mon[53367]: pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:02.331 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:01 vm05.local ceph-mon[58955]: pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:02.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:01 vm05.local ceph-mon[51512]: pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:04.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:03 vm09.local ceph-mon[53367]: pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:03 vm05.local ceph-mon[58955]: pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:04.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:03 vm05.local ceph-mon[51512]: pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:06.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:05 vm09.local ceph-mon[53367]: pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:05 vm05.local ceph-mon[58955]: pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:05 vm05.local ceph-mon[51512]: pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:07 vm05.local ceph-mon[58955]: pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:07 vm05.local ceph-mon[51512]: pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:08.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:07 vm09.local ceph-mon[53367]: pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:08 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:05:09.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:08 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:05:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:08 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:05:09.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:05:09 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:05:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:09 vm05.local ceph-mon[58955]: pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:10.331 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:09 vm05.local ceph-mon[51512]: pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:10.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:05:09 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:05:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:05:10.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:09 vm09.local ceph-mon[53367]: pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:10 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:11.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:10 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:11.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:10 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:11 vm05.local ceph-mon[58955]: pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:12.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:11 vm05.local ceph-mon[51512]: pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:12.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:11 vm09.local ceph-mon[53367]: pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:13 vm05.local ceph-mon[58955]: pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:14.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:13 vm05.local ceph-mon[51512]: pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:13 vm09.local ceph-mon[53367]: pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:15 vm05.local ceph-mon[58955]: pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:16.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:15 vm05.local ceph-mon[51512]: pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:15 vm09.local ceph-mon[53367]: pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:17 vm05.local ceph-mon[58955]: pgmap v1493: 228 pgs: 228 active+clean; 
455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:18.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:17 vm05.local ceph-mon[51512]: pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:18.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:17 vm09.local ceph-mon[53367]: pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:19.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:05:19 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:05:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:19 vm05.local ceph-mon[58955]: pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:19 vm05.local ceph-mon[51512]: pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:05:19 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:05:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:05:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:19 vm09.local ceph-mon[53367]: pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:20 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:21.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:20 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:20 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:21 vm05.local ceph-mon[58955]: pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:22.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:21 vm05.local ceph-mon[51512]: pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:21 vm09.local ceph-mon[53367]: pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:24 vm05.local ceph-mon[58955]: pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:24 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:05:24.332 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:24 vm05.local ceph-mon[51512]: pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:24.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:24 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:05:24.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:24 vm09.local ceph-mon[53367]: pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:24.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:24 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:05:26.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:26 vm05.local ceph-mon[58955]: pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:26.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:26 vm05.local ceph-mon[51512]: pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:26.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:26 vm09.local ceph-mon[53367]: pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:28 vm05.local ceph-mon[58955]: pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:28.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:28 vm05.local ceph-mon[51512]: pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:28.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:28 vm09.local ceph-mon[53367]: pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:29.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:05:29 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:05:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:30 vm05.local ceph-mon[58955]: pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:30 vm05.local ceph-mon[51512]: pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:05:29 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:05:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:05:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:30 vm09.local ceph-mon[53367]: pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:31 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": 
"json"}]: dispatch 2026-03-10T14:05:31.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:31 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:31.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:31 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:32.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:32 vm05.local ceph-mon[58955]: pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:32.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:32 vm05.local ceph-mon[51512]: pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:32.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:32 vm09.local ceph-mon[53367]: pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:34.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:34 vm05.local ceph-mon[58955]: pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:34.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:34 vm05.local ceph-mon[51512]: pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:34 vm09.local ceph-mon[53367]: pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:36.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:36 vm05.local ceph-mon[58955]: pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:36.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:36 vm05.local ceph-mon[51512]: pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:36.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:36 vm09.local ceph-mon[53367]: pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:38 vm05.local ceph-mon[58955]: pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:38.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:38 vm05.local ceph-mon[51512]: pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:38.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:38 vm09.local ceph-mon[53367]: pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:39 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:05:39.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:39 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:05:39.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:39 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:05:39.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:05:39 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:05:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:40 vm05.local ceph-mon[58955]: pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:40.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:40 vm05.local ceph-mon[51512]: pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:40.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:05:39 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:05:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:05:40.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:40 vm09.local ceph-mon[53367]: pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:41.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:41 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:41.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:41 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:41.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:41 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:42.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:42 vm09.local ceph-mon[53367]: pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:42.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:42 vm05.local ceph-mon[58955]: pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:42.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:42 vm05.local ceph-mon[51512]: pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:44.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:44 vm09.local ceph-mon[53367]: pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:44.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:44 vm05.local ceph-mon[58955]: pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:44.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:44 vm05.local ceph-mon[51512]: pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:46.423 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:46 vm09.local ceph-mon[53367]: pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:46.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:46 vm05.local ceph-mon[58955]: pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:46.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:46 vm05.local ceph-mon[51512]: pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:48 vm09.local ceph-mon[53367]: pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:48.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:48 vm05.local ceph-mon[58955]: pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:48.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:48 vm05.local ceph-mon[51512]: pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:49.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:05:49 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:05:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:50 vm05.local ceph-mon[58955]: pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:50 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:05:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:50 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:05:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:50 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:05:50.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:50 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:05:50.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:50 vm05.local ceph-mon[51512]: pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:50.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:50 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:05:50.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:50 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:05:50.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:50 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:05:50.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:50 
vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:05:50.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:05:49 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:05:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:05:50.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:50 vm09.local ceph-mon[53367]: pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:50.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:50 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:05:50.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:50 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:05:50.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:50 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:05:50.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:50 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:05:51.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:51 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:51.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:51 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:51.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:51 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:05:52.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:52 vm05.local ceph-mon[58955]: pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:52.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:52 vm05.local ceph-mon[51512]: pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:52.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:52 vm09.local ceph-mon[53367]: pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:53.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:53 vm09.local ceph-mon[53367]: pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:53.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:53 vm05.local ceph-mon[58955]: pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:53.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:53 vm05.local ceph-mon[51512]: pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:54.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:54 vm09.local ceph-mon[53367]: 
from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:05:54.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:54 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:05:54.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:54 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:05:55.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:55 vm09.local ceph-mon[53367]: pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:55 vm05.local ceph-mon[58955]: pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:55 vm05.local ceph-mon[51512]: pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:05:57.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:57 vm05.local ceph-mon[58955]: pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:57.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:57 vm05.local ceph-mon[51512]: pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:57.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:57 vm09.local ceph-mon[53367]: pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:59.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:05:59 vm05.local ceph-mon[58955]: pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:59.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:05:59 vm05.local ceph-mon[51512]: pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:05:59 vm09.local ceph-mon[53367]: pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:05:59.923 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:05:59 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:06:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:05:59 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:05:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:06:00.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:00 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:00.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:00 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-10T14:06:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:00 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:01.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:01 vm05.local ceph-mon[58955]: pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:01.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:01 vm05.local ceph-mon[51512]: pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:01.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:01 vm09.local ceph-mon[53367]: pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:03.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:03 vm05.local ceph-mon[58955]: pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:03.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:03 vm05.local ceph-mon[51512]: pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:03.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:03 vm09.local ceph-mon[53367]: pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:05.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:05 vm05.local ceph-mon[58955]: pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:05.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:05 vm05.local ceph-mon[51512]: pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:05.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:05 vm09.local ceph-mon[53367]: pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:07 vm05.local ceph-mon[58955]: pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:07 vm05.local ceph-mon[51512]: pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:07.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:07 vm09.local ceph-mon[53367]: pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:09 vm05.local ceph-mon[58955]: pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:09.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:09 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:06:09.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:09 vm05.local ceph-mon[51512]: pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s 
rd, 0 op/s 2026-03-10T14:06:09.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:09 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:06:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:09 vm09.local ceph-mon[53367]: pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:09 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:06:09.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:06:09 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:06:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:06:09 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:06:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:06:10.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:10 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:10.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:10 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:10 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:11.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:11 vm05.local ceph-mon[58955]: pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:11.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:11 vm05.local ceph-mon[51512]: pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:11.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:11 vm09.local ceph-mon[53367]: pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:13.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:13 vm05.local ceph-mon[58955]: pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:13.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:13 vm05.local ceph-mon[51512]: pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:13.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:13 vm09.local ceph-mon[53367]: pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:15.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:15 vm05.local ceph-mon[58955]: pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:15.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:15 vm05.local 
ceph-mon[51512]: pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:15 vm09.local ceph-mon[53367]: pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:17 vm09.local ceph-mon[53367]: pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:18.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:17 vm05.local ceph-mon[58955]: pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:18.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:17 vm05.local ceph-mon[51512]: pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:19.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:19 vm09.local ceph-mon[53367]: pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:19.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:06:19 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:06:19.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:19 vm05.local ceph-mon[58955]: pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:19.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:19 vm05.local ceph-mon[51512]: pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:06:19 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:06:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:06:21.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:20 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:21.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:20 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:20 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:21 vm05.local ceph-mon[58955]: pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:22.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:21 vm05.local ceph-mon[51512]: pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:21 vm09.local ceph-mon[53367]: pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T14:06:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:23 vm05.local ceph-mon[58955]: pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:23 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:06:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:23 vm05.local ceph-mon[51512]: pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:24.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:23 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:06:24.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:23 vm09.local ceph-mon[53367]: pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:24.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:23 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:06:26.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:25 vm05.local ceph-mon[58955]: pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:26.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:25 vm05.local ceph-mon[51512]: pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:25 vm09.local ceph-mon[53367]: pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:28.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:27 vm05.local ceph-mon[58955]: pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:27 vm05.local ceph-mon[51512]: pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:28.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:27 vm09.local ceph-mon[53367]: pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:29.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:29 vm09.local ceph-mon[53367]: pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:29.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:06:29 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:06:29.981 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:29 vm05.local ceph-mon[58955]: pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:29.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:29 vm05.local ceph-mon[51512]: pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T14:06:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:06:29 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:06:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:06:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:30 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:31.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:30 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:31.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:30 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:31 vm05.local ceph-mon[58955]: pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:31 vm05.local ceph-mon[51512]: pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:31 vm09.local ceph-mon[53367]: pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:33 vm05.local ceph-mon[58955]: pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:34.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:33 vm05.local ceph-mon[51512]: pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:34.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:33 vm09.local ceph-mon[53367]: pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:36.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:35 vm09.local ceph-mon[53367]: pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:36.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:35 vm05.local ceph-mon[58955]: pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:36.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:35 vm05.local ceph-mon[51512]: pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:38 vm05.local ceph-mon[58955]: pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:38.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:38 vm05.local ceph-mon[51512]: pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:38.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:38 vm09.local ceph-mon[53367]: pgmap v1533: 228 
pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:39.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:39 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:06:39.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:39 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:06:39.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:39 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:06:39.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:06:39 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:06:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:40 vm05.local ceph-mon[58955]: pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:40.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:40 vm05.local ceph-mon[51512]: pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:40.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:06:39 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:06:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:06:40.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:40 vm09.local ceph-mon[53367]: pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:41 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:41.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:41 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:41.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:41 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:42.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:42 vm05.local ceph-mon[58955]: pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:42.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:42 vm05.local ceph-mon[51512]: pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:42.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:42 vm09.local ceph-mon[53367]: pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:43.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:43 vm05.local ceph-mon[58955]: pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-10T14:06:43.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:43 vm05.local ceph-mon[51512]: pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:43.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:43 vm09.local ceph-mon[53367]: pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:45.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:45 vm05.local ceph-mon[58955]: pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:45.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:45 vm05.local ceph-mon[51512]: pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:45 vm09.local ceph-mon[53367]: pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:47 vm09.local ceph-mon[53367]: pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:47 vm05.local ceph-mon[58955]: pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:48.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:47 vm05.local ceph-mon[51512]: pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:49 vm09.local ceph-mon[53367]: pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:49.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:06:49 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:06:49.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:49 vm05.local ceph-mon[58955]: pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:49.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:49 vm05.local ceph-mon[51512]: pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:49.981 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:06:49 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:06:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:06:50.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:50 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:50.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:50 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:06:50.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:50 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:06:50.924 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:50 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:06:50.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:50 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:06:50.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:50 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:06:50.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:50 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:06:50.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:50 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:06:50.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:50 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:06:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:06:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:06:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:06:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:06:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:06:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:06:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:06:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:06:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:06:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:06:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:06:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:06:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 
2026-03-10T14:06:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:06:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:06:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:06:51.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:50 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:06:51.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:51 vm09.local ceph-mon[53367]: pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:51 vm05.local ceph-mon[58955]: pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:52.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:51 vm05.local ceph-mon[51512]: pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:53.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:53 vm09.local ceph-mon[53367]: pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:53.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:53 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:06:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:53 vm05.local ceph-mon[58955]: pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:53 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:06:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:53 vm05.local ceph-mon[51512]: pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:54.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:53 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:06:56.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:55 vm05.local ceph-mon[58955]: pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:55 vm05.local ceph-mon[51512]: pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:56.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:55 vm09.local ceph-mon[53367]: pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:06:58.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:57 vm09.local ceph-mon[53367]: pgmap 
v1543: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:57 vm05.local ceph-mon[58955]: pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:58.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:57 vm05.local ceph-mon[51512]: pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:06:59.902 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:06:59 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:07:00.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:06:59 vm09.local ceph-mon[53367]: pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:06:59 vm05.local ceph-mon[58955]: pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:06:59 vm05.local ceph-mon[51512]: pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:00.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:06:59 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:06:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:07:01.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:00 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:01.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:00 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:01.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:00 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:02.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:01 vm05.local ceph-mon[58955]: pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:02.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:01 vm05.local ceph-mon[51512]: pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:02.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:01 vm09.local ceph-mon[53367]: pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:04.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:03 vm05.local ceph-mon[58955]: pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:04.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:03 vm05.local ceph-mon[51512]: pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:04.424 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:03 vm09.local ceph-mon[53367]: pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:06.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:06 vm05.local ceph-mon[58955]: pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:06.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:06 vm05.local ceph-mon[51512]: pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:06.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:06 vm09.local ceph-mon[53367]: pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:08.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:08 vm05.local ceph-mon[58955]: pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:08.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:08 vm05.local ceph-mon[51512]: pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:08.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:08 vm09.local ceph-mon[53367]: pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:09.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:09 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:07:09.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:09 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:07:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:09 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:07:09.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:07:09 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:07:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:10 vm05.local ceph-mon[58955]: pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:10 vm05.local ceph-mon[51512]: pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:07:09 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:07:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:07:10.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:10 vm09.local ceph-mon[53367]: pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:11.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:11 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-10T14:07:11.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:11 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:11.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:11 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:12.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:12 vm09.local ceph-mon[53367]: pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:12.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:12 vm05.local ceph-mon[58955]: pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:12.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:12 vm05.local ceph-mon[51512]: pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:14.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:14 vm09.local ceph-mon[53367]: pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:14.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:14 vm05.local ceph-mon[51512]: pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:14.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:14 vm05.local ceph-mon[58955]: pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:16.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:16 vm09.local ceph-mon[53367]: pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:16.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:16 vm05.local ceph-mon[51512]: pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:16.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:16 vm05.local ceph-mon[58955]: pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:17.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:17 vm05.local ceph-mon[51512]: pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:17.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:17 vm05.local ceph-mon[58955]: pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:17 vm09.local ceph-mon[53367]: pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:19.726 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:19 vm09.local ceph-mon[53367]: pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:19.726 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:07:19 vm09.local 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:07:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:19 vm05.local ceph-mon[58955]: pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:19 vm05.local ceph-mon[51512]: pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:07:19 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:07:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:07:20.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:20 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:20.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:20 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:20.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:20 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:21.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:21 vm05.local ceph-mon[58955]: pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:21.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:21 vm05.local ceph-mon[51512]: pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:21.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:21 vm09.local ceph-mon[53367]: pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:23 vm05.local ceph-mon[58955]: pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:23 vm05.local ceph-mon[51512]: pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:23.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:23 vm09.local ceph-mon[53367]: pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:24.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:24 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:07:25.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:24 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:07:25.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:24 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-10T14:07:25.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:25 vm09.local ceph-mon[53367]: pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:25 vm05.local ceph-mon[58955]: pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:26.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:25 vm05.local ceph-mon[51512]: pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:27.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:27 vm09.local ceph-mon[53367]: pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:27 vm05.local ceph-mon[58955]: pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:28.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:27 vm05.local ceph-mon[51512]: pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:29.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:29 vm09.local ceph-mon[53367]: pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:29.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:07:29 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:07:29.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:29 vm05.local ceph-mon[58955]: pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:29.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:29 vm05.local ceph-mon[51512]: pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:07:29 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:07:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:07:30.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:30 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:30 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:31.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:30 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:31 vm05.local ceph-mon[58955]: pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:32.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:31 vm05.local ceph-mon[51512]: pgmap 
v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:32.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:31 vm09.local ceph-mon[53367]: pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:33 vm05.local ceph-mon[58955]: pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:33 vm05.local ceph-mon[51512]: pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:34.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:33 vm09.local ceph-mon[53367]: pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:35 vm05.local ceph-mon[58955]: pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:36.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:35 vm05.local ceph-mon[51512]: pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:36.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:35 vm09.local ceph-mon[53367]: pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:38.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:37 vm05.local ceph-mon[58955]: pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:38.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:37 vm05.local ceph-mon[51512]: pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:38.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:37 vm09.local ceph-mon[53367]: pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:39.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:38 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:07:39.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:38 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:07:39.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:38 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:07:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:39 vm09.local ceph-mon[53367]: pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:39.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:07:39 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:07:40.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:39 vm05.local ceph-mon[58955]: pgmap 
v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:40.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:39 vm05.local ceph-mon[51512]: pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:40.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:07:39 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:07:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:07:41.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:41 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:41.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:41 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:41.610 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:41 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:42.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:42 vm05.local ceph-mon[58955]: pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:42.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:42 vm05.local ceph-mon[51512]: pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:42.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:42 vm09.local ceph-mon[53367]: pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:43.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:43 vm05.local ceph-mon[58955]: pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:43.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:43 vm05.local ceph-mon[51512]: pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:43.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:43 vm09.local ceph-mon[53367]: pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:45.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:45 vm09.local ceph-mon[53367]: pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:46.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:45 vm05.local ceph-mon[58955]: pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:46.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:45 vm05.local ceph-mon[51512]: pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:48.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:47 vm05.local ceph-mon[58955]: pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T14:07:48.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:47 vm05.local ceph-mon[51512]: pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:48.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:47 vm09.local ceph-mon[53367]: pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:49.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:49 vm09.local ceph-mon[53367]: pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:49.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:07:49 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:07:50.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:49 vm05.local ceph-mon[58955]: pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:50.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:49 vm05.local ceph-mon[51512]: pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:50.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:07:49 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:07:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:07:51.028 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:50 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:51.028 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:50 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:07:51.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:50 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:51.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:50 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:07:51.035 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:50 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:07:51.035 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:50 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:07:52.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:51 vm05.local ceph-mon[51512]: pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:52.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:51 vm05.local ceph-mon[58955]: pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:52.133 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:51 vm09.local ceph-mon[53367]: 
pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:53.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:07:53.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:07:53.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[58955]: pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:53.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:07:53.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:07:53.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:07:53.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:07:53.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:07:53.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:07:53.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:07:53.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[51512]: pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:53.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:07:53.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:07:53.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:07:53.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:07:53.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:53 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:07:53.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:53 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:07:53.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:53 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:07:53.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:53 vm09.local ceph-mon[53367]: pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:53.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:53 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:07:53.674 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:53 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:07:53.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:53 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:07:53.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:53 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:07:53.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:53 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:07:54.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:54 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:07:54.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:54 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:07:54.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:54 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:07:55.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:55 vm05.local ceph-mon[58955]: pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:55.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:55 vm05.local ceph-mon[51512]: pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:55.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:55 vm09.local ceph-mon[53367]: pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:07:57.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:57 vm05.local ceph-mon[58955]: pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:57.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:57 vm05.local ceph-mon[51512]: pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:57.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:57 vm09.local ceph-mon[53367]: pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:07:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:07:59 vm09.local ceph-mon[53367]: pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 18 KiB/s rd, 0 B/s wr, 28 op/s 2026-03-10T14:07:59.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:07:59 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:07:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:07:59 vm05.local ceph-mon[58955]: pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 18 KiB/s rd, 0 B/s wr, 28 op/s 2026-03-10T14:07:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:07:59 vm05.local ceph-mon[51512]: pgmap v1574: 228 pgs: 228 
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 18 KiB/s rd, 0 B/s wr, 28 op/s 2026-03-10T14:08:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:07:59 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:07:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:08:00.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:00 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:00 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:01.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:00 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:01.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:01 vm09.local ceph-mon[53367]: pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T14:08:02.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:01 vm05.local ceph-mon[51512]: pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T14:08:02.088 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:01 vm05.local ceph-mon[58955]: pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T14:08:04.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:03 vm05.local ceph-mon[58955]: pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T14:08:04.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:03 vm05.local ceph-mon[51512]: pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T14:08:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:03 vm09.local ceph-mon[53367]: pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T14:08:06.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:05 vm05.local ceph-mon[58955]: pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T14:08:06.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:05 vm05.local ceph-mon[51512]: pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T14:08:06.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:05 vm09.local ceph-mon[53367]: pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T14:08:08.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:07 vm05.local ceph-mon[58955]: pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T14:08:08.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:07 vm05.local ceph-mon[51512]: pgmap v1578: 228 pgs: 228 
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T14:08:08.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:07 vm09.local ceph-mon[53367]: pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T14:08:09.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:08 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:08:09.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:08 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:08:09.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:08 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:08:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:09 vm09.local ceph-mon[53367]: pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T14:08:09.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:08:09 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:08:10.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:09 vm05.local ceph-mon[58955]: pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T14:08:10.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:09 vm05.local ceph-mon[51512]: pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s 2026-03-10T14:08:10.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:08:09 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:08:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:08:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:10 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:11.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:10 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:11.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:10 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:12.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:11 vm05.local ceph-mon[58955]: pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s 2026-03-10T14:08:12.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:11 vm05.local ceph-mon[51512]: pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s 2026-03-10T14:08:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:11 vm09.local ceph-mon[53367]: pgmap v1580: 228 pgs: 228 active+clean; 
455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s 2026-03-10T14:08:14.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:13 vm05.local ceph-mon[58955]: pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:14.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:13 vm05.local ceph-mon[51512]: pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:14.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:13 vm09.local ceph-mon[53367]: pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:16.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:15 vm05.local ceph-mon[58955]: pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:16.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:15 vm05.local ceph-mon[51512]: pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:16.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:15 vm09.local ceph-mon[53367]: pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:17 vm05.local ceph-mon[58955]: pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:18.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:17 vm05.local ceph-mon[51512]: pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:18.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:17 vm09.local ceph-mon[53367]: pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:19.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:08:19 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:08:20.244 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:20 vm09.local ceph-mon[53367]: pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:20 vm05.local ceph-mon[58955]: pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:20 vm05.local ceph-mon[51512]: pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:20.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:08:19 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:08:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:08:21.145 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:21 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:21.412 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:21 vm05.local ceph-mon[58955]: 
from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:21.412 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:21 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:21.924 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 14:08:21 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=cleanup t=2026-03-10T14:08:21.51647671Z level=info msg="Completed cleanup jobs" duration=1.662362ms 2026-03-10T14:08:21.924 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 14:08:21 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=plugins.update.checker t=2026-03-10T14:08:21.696022875Z level=info msg="Update check succeeded" duration=58.459692ms 2026-03-10T14:08:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:22 vm05.local ceph-mon[58955]: pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:22 vm05.local ceph-mon[51512]: pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:22.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:22 vm09.local ceph-mon[53367]: pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:23.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:23 vm05.local ceph-mon[58955]: pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:23.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:23 vm05.local ceph-mon[51512]: pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:23.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:23 vm09.local ceph-mon[53367]: pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:24.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:24 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:08:24.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:24 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:08:24.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:24 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:08:25.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:25 vm05.local ceph-mon[58955]: pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:25.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:25 vm05.local ceph-mon[51512]: pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:25.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:25 vm09.local ceph-mon[53367]: pgmap v1587: 228 pgs: 228 active+clean; 455 KiB 
data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:27.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:27 vm05.local ceph-mon[58955]: pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:27.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:27 vm05.local ceph-mon[51512]: pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:27.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:27 vm09.local ceph-mon[53367]: pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:29.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:29 vm05.local ceph-mon[58955]: pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:29.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:29 vm05.local ceph-mon[51512]: pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:29.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:29 vm09.local ceph-mon[53367]: pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:29.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:08:29 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:08:30.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:08:29 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:08:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:08:30.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:30 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:30.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:30 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:30.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:30 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:31.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:31 vm05.local ceph-mon[58955]: pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:31.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:31 vm05.local ceph-mon[51512]: pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:31.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:31 vm09.local ceph-mon[53367]: pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:33.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:33 vm09.local ceph-mon[53367]: pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 
14:08:33 vm05.local ceph-mon[58955]: pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:34.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:33 vm05.local ceph-mon[51512]: pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:35.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:35 vm09.local ceph-mon[53367]: pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:36.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:35 vm05.local ceph-mon[58955]: pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:36.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:35 vm05.local ceph-mon[51512]: pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:38.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:37 vm05.local ceph-mon[58955]: pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:38.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:37 vm05.local ceph-mon[51512]: pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:38.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:37 vm09.local ceph-mon[53367]: pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:39.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:38 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:08:39.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:38 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:08:39.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:38 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:08:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:39 vm09.local ceph-mon[53367]: pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:39.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:08:39 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:08:39.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:39 vm05.local ceph-mon[58955]: pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:39.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:39 vm05.local ceph-mon[51512]: pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:08:39 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:08:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:08:41.082 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:40 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:41.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:40 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:41.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:40 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:41 vm05.local ceph-mon[58955]: pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:41 vm05.local ceph-mon[51512]: pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:41 vm09.local ceph-mon[53367]: pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:44.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:43 vm05.local ceph-mon[58955]: pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:44.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:43 vm05.local ceph-mon[51512]: pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:44.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:43 vm09.local ceph-mon[53367]: pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:46.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:45 vm09.local ceph-mon[53367]: pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:46.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:45 vm05.local ceph-mon[58955]: pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:46.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:45 vm05.local ceph-mon[51512]: pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:48.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:47 vm05.local ceph-mon[58955]: pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:48.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:47 vm05.local ceph-mon[51512]: pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:48.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:47 vm09.local ceph-mon[53367]: pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:49.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:08:49 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no 
tcmu-runner data available 2026-03-10T14:08:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:49 vm05.local ceph-mon[58955]: pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:50.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:49 vm05.local ceph-mon[51512]: pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:50.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:08:49 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:08:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:08:50.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:49 vm09.local ceph-mon[53367]: pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:51.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:51 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:51.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:51 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:51.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:51 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:08:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:52 vm05.local ceph-mon[58955]: pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:52.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:52 vm05.local ceph-mon[51512]: pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:52 vm09.local ceph-mon[53367]: pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:08:53.332 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:08:53.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:53 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:08:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:53 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-10T14:08:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:53 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:08:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:53 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:08:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:53 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:08:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:53 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:08:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:53 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:08:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:53 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:08:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:53 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:08:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:53 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:08:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:53 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:08:53.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:53 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:08:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:54 vm05.local ceph-mon[58955]: pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:54 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:08:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:54 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:08:54.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:54 vm05.local ceph-mon[51512]: pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:54.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:54 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:08:54.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:54 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:08:54.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:54 vm09.local ceph-mon[53367]: pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:08:54.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:54 vm09.local ceph-mon[53367]: 
from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:08:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:54 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:08:56.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:56 vm05.local ceph-mon[58955]: pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:56.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:56 vm05.local ceph-mon[51512]: pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:56 vm09.local ceph-mon[53367]: pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:08:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:08:58 vm05.local ceph-mon[58955]: pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:08:58.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:08:58 vm05.local ceph-mon[51512]: pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:08:58.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:08:58 vm09.local ceph-mon[53367]: pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:08:59.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:08:59 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:09:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:00 vm05.local ceph-mon[58955]: pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:00 vm05.local ceph-mon[51512]: pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:00.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:08:59 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:08:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:09:00.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:00 vm09.local ceph-mon[53367]: pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:01.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:01 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:01.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:01 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:01.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:01 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:02.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:02 
vm09.local ceph-mon[53367]: pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:09:02.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:02 vm05.local ceph-mon[58955]: pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:09:02.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:02 vm05.local ceph-mon[51512]: pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:09:04.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:04 vm09.local ceph-mon[53367]: pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:09:04.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:04 vm05.local ceph-mon[58955]: pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:09:04.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:04 vm05.local ceph-mon[51512]: pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T14:09:06.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:06 vm05.local ceph-mon[58955]: pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:06.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:06 vm05.local ceph-mon[51512]: pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:06.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:06 vm09.local ceph-mon[53367]: pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:07 vm05.local ceph-mon[58955]: pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:09:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:07 vm05.local ceph-mon[51512]: pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:09:07.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:07 vm09.local ceph-mon[53367]: pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:09:09.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:09 vm09.local ceph-mon[53367]: pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:09.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:09 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:09:09.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:09:09 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:09:09.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:09 vm05.local ceph-mon[58955]: pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:09.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:09 vm05.local ceph-mon[58955]: 
from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:09:09.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:09 vm05.local ceph-mon[51512]: pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:09.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:09 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:09:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:09:09 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:09:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:09:11.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:11 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:11.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:11 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:11.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:11 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:12.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:12 vm05.local ceph-mon[58955]: pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:09:12.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:12 vm05.local ceph-mon[51512]: pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:09:12.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:12 vm09.local ceph-mon[53367]: pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:09:14.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:14 vm05.local ceph-mon[58955]: pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:09:14.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:14 vm05.local ceph-mon[51512]: pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:09:14.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:14 vm09.local ceph-mon[53367]: pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:09:16.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:16 vm05.local ceph-mon[58955]: pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:16.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:16 vm05.local ceph-mon[51512]: pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:16.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:16 vm09.local ceph-mon[53367]: pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 
KiB/s rd, 1 op/s 2026-03-10T14:09:18.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:17 vm09.local ceph-mon[53367]: pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:09:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:17 vm05.local ceph-mon[58955]: pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:09:18.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:17 vm05.local ceph-mon[51512]: pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:09:19.976 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:19 vm09.local ceph-mon[53367]: pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:19.976 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:09:19 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:09:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:19 vm05.local ceph-mon[58955]: pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:19 vm05.local ceph-mon[51512]: pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:09:19 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:09:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:09:21.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:20 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:21.233 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:20 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:21.233 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:20 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:22.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:21 vm05.local ceph-mon[51512]: pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:09:22.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:21 vm05.local ceph-mon[51512]: osdmap e735: 8 total, 8 up, 8 in 2026-03-10T14:09:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:21 vm05.local ceph-mon[58955]: pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:09:22.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:21 vm05.local ceph-mon[58955]: osdmap e735: 8 total, 8 up, 8 in 2026-03-10T14:09:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:21 vm09.local ceph-mon[53367]: pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:09:22.423 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:21 vm09.local ceph-mon[53367]: osdmap e735: 8 total, 8 up, 8 in 2026-03-10T14:09:23.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:22 vm05.local ceph-mon[51512]: osdmap e736: 8 total, 8 up, 8 in 2026-03-10T14:09:23.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:22 vm05.local ceph-mon[58955]: osdmap e736: 8 total, 8 up, 8 in 2026-03-10T14:09:23.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:22 vm09.local ceph-mon[53367]: osdmap e736: 8 total, 8 up, 8 in 2026-03-10T14:09:24.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:23 vm05.local ceph-mon[51512]: pgmap v1618: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T14:09:24.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:23 vm05.local ceph-mon[51512]: osdmap e737: 8 total, 8 up, 8 in 2026-03-10T14:09:24.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:23 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:09:24.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:23 vm05.local ceph-mon[51512]: osdmap e738: 8 total, 8 up, 8 in 2026-03-10T14:09:24.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:23 vm05.local ceph-mon[58955]: pgmap v1618: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T14:09:24.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:23 vm05.local ceph-mon[58955]: osdmap e737: 8 total, 8 up, 8 in 2026-03-10T14:09:24.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:23 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:09:24.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:23 vm05.local ceph-mon[58955]: osdmap e738: 8 total, 8 up, 8 in 2026-03-10T14:09:24.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:23 vm09.local ceph-mon[53367]: pgmap v1618: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T14:09:24.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:23 vm09.local ceph-mon[53367]: osdmap e737: 8 total, 8 up, 8 in 2026-03-10T14:09:24.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:23 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:09:24.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:23 vm09.local ceph-mon[53367]: osdmap e738: 8 total, 8 up, 8 in 2026-03-10T14:09:26.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:25 vm05.local ceph-mon[58955]: pgmap v1621: 196 pgs: 196 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:26.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:25 vm05.local ceph-mon[58955]: osdmap e739: 8 total, 8 up, 8 in 2026-03-10T14:09:26.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:25 vm05.local ceph-mon[58955]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:09:26.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:25 vm05.local ceph-mon[51512]: pgmap v1621: 196 pgs: 196 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
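Note: the "Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" entries above are expected while rados_api_tests creates and deletes scratch pools, and the job's log-ignorelist already whitelists this code. For reference, a minimal sketch of how the same health checks can be read and cleared from a client, assuming a reachable cluster and an admin keyring ("mypool" and "rados" are illustrative values, not from this run; the JSON layout of `ceph health detail` is assumed from squid-era output):

    import json, subprocess

    def ceph(*args):
        # Run a ceph CLI command and parse its JSON output.
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    # Surface active health checks, e.g. POOL_APP_NOT_ENABLED.
    for code, check in ceph("health", "detail").get("checks", {}).items():
        print(code, "-", check["summary"]["message"])

    # Tagging an application on a pool clears the warning for it.
    subprocess.run(["ceph", "osd", "pool", "application", "enable",
                    "mypool", "rados"], check=True)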
2026-03-10T14:09:26.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:25 vm05.local ceph-mon[51512]: osdmap e739: 8 total, 8 up, 8 in 2026-03-10T14:09:26.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:25 vm05.local ceph-mon[51512]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:09:26.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:25 vm09.local ceph-mon[53367]: pgmap v1621: 196 pgs: 196 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:26.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:25 vm09.local ceph-mon[53367]: osdmap e739: 8 total, 8 up, 8 in 2026-03-10T14:09:26.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:25 vm09.local ceph-mon[53367]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:09:27.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:26 vm05.local ceph-mon[58955]: osdmap e740: 8 total, 8 up, 8 in 2026-03-10T14:09:27.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:26 vm05.local ceph-mon[51512]: osdmap e740: 8 total, 8 up, 8 in 2026-03-10T14:09:27.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:26 vm09.local ceph-mon[53367]: osdmap e740: 8 total, 8 up, 8 in 2026-03-10T14:09:28.025 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: Running main() from gmock_main.cc 2026-03-10T14:09:28.025 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [==========] Running 2 tests from 1 test suite. 2026-03-10T14:09:28.025 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [----------] Global test environment set-up. 2026-03-10T14:09:28.025 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [----------] 2 tests from NeoRadosWatchNotify 2026-03-10T14:09:28.025 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [ RUN ] NeoRadosWatchNotify.WatchNotify 2026-03-10T14:09:28.025 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: handle_notify cookie 94223157374576 notify_id 3152505995265 notifier_gid 14988 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [ OK ] NeoRadosWatchNotify.WatchNotify (1801523 ms) 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [ RUN ] NeoRadosWatchNotify.WatchNotifyTimeout 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: Trying... 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: handle_notify cookie 94223170241696 notify_id 3165390897153 notifier_gid 50551 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: Waiting for 3.000000000s 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: Timed out. 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: Flushing... 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: Flushed... 
2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [ OK ] NeoRadosWatchNotify.WatchNotifyTimeout (7091 ms) 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [----------] 2 tests from NeoRadosWatchNotify (1808614 ms total) 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [----------] Global test environment tear-down 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [==========] 2 tests from 1 test suite ran. (1808614 ms total) 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stdout: watch_notify: [ PASSED ] 2 tests. 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91369 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91369 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91776 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91776 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92182 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 92182 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91937 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91937 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92258 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 92258 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91711 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91711 2026-03-10T14:09:28.026 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91232 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91232 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92341 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 92341 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91631 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91631 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=90961 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 90961 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91033 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91033 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.027 
INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91547 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91547 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91097 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91097 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91668 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91668 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=91686 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 91686 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92125 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 92125 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ for t in "${!pids[@]}" 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=92384 2026-03-10T14:09:28.027 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 92384 2026-03-10T14:09:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:27 vm05.local ceph-mon[58955]: pgmap v1624: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:27 vm05.local ceph-mon[58955]: osdmap e741: 8 total, 8 up, 8 in 2026-03-10T14:09:28.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:27 vm05.local ceph-mon[51512]: pgmap v1624: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:28.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:27 vm05.local ceph-mon[51512]: osdmap e741: 8 total, 8 up, 8 in 2026-03-10T14:09:28.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:27 vm09.local ceph-mon[53367]: pgmap v1624: 228 pgs: 32 unknown, 196 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:28.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:27 vm09.local ceph-mon[53367]: osdmap e741: 8 total, 8 up, 8 in 2026-03-10T14:09:29.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:29 vm05.local ceph-mon[58955]: osdmap e742: 8 total, 8 up, 8 in 2026-03-10T14:09:29.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:29 vm05.local ceph-mon[51512]: osdmap e742: 8 total, 8 up, 8 in 2026-03-10T14:09:29.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:29 vm09.local ceph-mon[53367]: osdmap e742: 8 total, 8 up, 8 in 2026-03-10T14:09:30.036 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:09:29 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:09:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:30 vm05.local ceph-mon[58955]: pgmap v1627: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T14:09:30.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:30 vm05.local ceph-mon[58955]: osdmap e743: 8 total, 8 up, 8 in 2026-03-10T14:09:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 
14:09:30 vm05.local ceph-mon[51512]: pgmap v1627: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T14:09:30.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:30 vm05.local ceph-mon[51512]: osdmap e743: 8 total, 8 up, 8 in 2026-03-10T14:09:30.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:09:29 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:09:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:09:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:30 vm09.local ceph-mon[53367]: pgmap v1627: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T14:09:30.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:30 vm09.local ceph-mon[53367]: osdmap e743: 8 total, 8 up, 8 in 2026-03-10T14:09:31.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:31 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:31.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:31 vm05.local ceph-mon[58955]: osdmap e744: 8 total, 8 up, 8 in 2026-03-10T14:09:31.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:31 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:31.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:31 vm05.local ceph-mon[51512]: osdmap e744: 8 total, 8 up, 8 in 2026-03-10T14:09:31.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:31 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:31.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:31 vm09.local ceph-mon[53367]: osdmap e744: 8 total, 8 up, 8 in 2026-03-10T14:09:32.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:32 vm05.local ceph-mon[58955]: pgmap v1630: 164 pgs: 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T14:09:32.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:32 vm05.local ceph-mon[58955]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:09:32.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:32 vm05.local ceph-mon[58955]: osdmap e745: 8 total, 8 up, 8 in 2026-03-10T14:09:32.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:32 vm05.local ceph-mon[51512]: pgmap v1630: 164 pgs: 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T14:09:32.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:32 vm05.local ceph-mon[51512]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:09:32.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:32 vm05.local ceph-mon[51512]: osdmap e745: 8 total, 8 up, 8 in 2026-03-10T14:09:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:32 vm09.local ceph-mon[53367]: pgmap v1630: 164 pgs: 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T14:09:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:32 vm09.local 
ceph-mon[53367]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:09:32.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:32 vm09.local ceph-mon[53367]: osdmap e745: 8 total, 8 up, 8 in 2026-03-10T14:09:33.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:33 vm09.local ceph-mon[53367]: pgmap v1632: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:09:33.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:33 vm09.local ceph-mon[53367]: osdmap e746: 8 total, 8 up, 8 in 2026-03-10T14:09:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:33 vm05.local ceph-mon[58955]: pgmap v1632: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:09:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:33 vm05.local ceph-mon[58955]: osdmap e746: 8 total, 8 up, 8 in 2026-03-10T14:09:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:33 vm05.local ceph-mon[51512]: pgmap v1632: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:09:34.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:33 vm05.local ceph-mon[51512]: osdmap e746: 8 total, 8 up, 8 in 2026-03-10T14:09:34.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:34 vm09.local ceph-mon[53367]: osdmap e747: 8 total, 8 up, 8 in 2026-03-10T14:09:34.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:34 vm09.local ceph-mon[53367]: osdmap e748: 8 total, 8 up, 8 in 2026-03-10T14:09:35.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:34 vm05.local ceph-mon[51512]: osdmap e747: 8 total, 8 up, 8 in 2026-03-10T14:09:35.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:34 vm05.local ceph-mon[51512]: osdmap e748: 8 total, 8 up, 8 in 2026-03-10T14:09:35.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:34 vm05.local ceph-mon[58955]: osdmap e747: 8 total, 8 up, 8 in 2026-03-10T14:09:35.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:34 vm05.local ceph-mon[58955]: osdmap e748: 8 total, 8 up, 8 in 2026-03-10T14:09:35.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:35 vm09.local ceph-mon[53367]: pgmap v1635: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:09:35.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:35 vm09.local ceph-mon[53367]: osdmap e749: 8 total, 8 up, 8 in 2026-03-10T14:09:36.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:35 vm05.local ceph-mon[51512]: pgmap v1635: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:09:36.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:35 vm05.local ceph-mon[51512]: osdmap e749: 8 total, 8 up, 8 in 2026-03-10T14:09:36.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:35 vm05.local ceph-mon[58955]: pgmap v1635: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:09:36.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:35 vm05.local ceph-mon[58955]: osdmap e749: 8 total, 8 up, 8 in 2026-03-10T14:09:37.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:37 vm09.local ceph-mon[53367]: pgmap v1638: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:37.924 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:37 vm09.local ceph-mon[53367]: osdmap e750: 8 total, 8 up, 8 in 2026-03-10T14:09:37.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:37 vm09.local ceph-mon[53367]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:09:38.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:37 vm05.local ceph-mon[58955]: pgmap v1638: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:38.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:37 vm05.local ceph-mon[58955]: osdmap e750: 8 total, 8 up, 8 in 2026-03-10T14:09:38.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:37 vm05.local ceph-mon[58955]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:09:38.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:37 vm05.local ceph-mon[51512]: pgmap v1638: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:38.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:37 vm05.local ceph-mon[51512]: osdmap e750: 8 total, 8 up, 8 in 2026-03-10T14:09:38.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:37 vm05.local ceph-mon[51512]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:09:38.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:38 vm09.local ceph-mon[53367]: osdmap e751: 8 total, 8 up, 8 in 2026-03-10T14:09:39.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:38 vm05.local ceph-mon[58955]: osdmap e751: 8 total, 8 up, 8 in 2026-03-10T14:09:39.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:38 vm05.local ceph-mon[51512]: osdmap e751: 8 total, 8 up, 8 in 2026-03-10T14:09:39.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:39 vm09.local ceph-mon[53367]: pgmap v1641: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:39 vm09.local ceph-mon[53367]: osdmap e752: 8 total, 8 up, 8 in 2026-03-10T14:09:39.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:39 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:09:39.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:09:39 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:09:39.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:39 vm05.local ceph-mon[58955]: pgmap v1641: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:39.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:39 vm05.local ceph-mon[58955]: osdmap e752: 8 total, 8 up, 8 in 2026-03-10T14:09:39.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:39 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:09:39.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:39 vm05.local ceph-mon[51512]: pgmap v1641: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
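Note: the recurring `osd blocklist ls` dispatches from mgr.y on all three mons are the mgr's periodic poll for blocklisted client entries. A sketch of the same query issued manually (cluster access assumed; the shape of the returned records is an assumption, so they are printed raw rather than field-by-field):

    import json, subprocess

    out = subprocess.run(["ceph", "osd", "blocklist", "ls", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    # Each entry describes a blocked client address and its expiry.
    for entry in json.loads(out):
        print(entry)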
2026-03-10T14:09:39.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:39 vm05.local ceph-mon[51512]: osdmap e752: 8 total, 8 up, 8 in
2026-03-10T14:09:39.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:39 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:09:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:09:39 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:09:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: Running main() from gmock_main.cc
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [==========] Running 7 tests from 1 test suite.
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [----------] Global test environment set-up.
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [----------] 7 tests from NeoRadosWriteOps
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ RUN ] NeoRadosWriteOps.AssertExists
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ OK ] NeoRadosWriteOps.AssertExists (1801516 ms)
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ RUN ] NeoRadosWriteOps.AssertVersion
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ OK ] NeoRadosWriteOps.AssertVersion (3006 ms)
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Xattrs
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ OK ] NeoRadosWriteOps.Xattrs (3051 ms)
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Write
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ OK ] NeoRadosWriteOps.Write (3036 ms)
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Exec
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ OK ] NeoRadosWriteOps.Exec (3552 ms)
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ RUN ] NeoRadosWriteOps.WriteSame
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ OK ] NeoRadosWriteOps.WriteSame (3013 ms)
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ RUN ] NeoRadosWriteOps.CmpExt
2026-03-10T14:09:40.631 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ OK ] NeoRadosWriteOps.CmpExt (4033 ms)
2026-03-10T14:09:40.632 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [----------] 7 tests from NeoRadosWriteOps (1821207 ms total)
2026-03-10T14:09:40.632 INFO:tasks.workunit.client.0.vm05.stdout: write_operations:
2026-03-10T14:09:40.632 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [----------] Global test environment tear-down
2026-03-10T14:09:40.632 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [==========] 7 tests from 1 test suite ran. (1821207 ms total)
2026-03-10T14:09:40.632 INFO:tasks.workunit.client.0.vm05.stdout: write_operations: [ PASSED ] 7 tests.
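Note: the NeoRadosWriteOps suite above exercises compound write operations (assert-exists, xattrs, write, exec, writesame, cmpext) through the C++ NeoRados API. A loose CLI analogue of the Write and Xattrs cases, sketched in Python; "testpool", "obj1" and the xattr name are hypothetical, and a configured client is assumed:

    import subprocess, tempfile

    def rados(*args):
        # Drive the rados CLI against a scratch pool.
        return subprocess.run(["rados", "-p", "testpool", *args],
                              check=True, capture_output=True, text=True)

    with tempfile.NamedTemporaryFile("w", suffix=".dat") as f:
        f.write("payload")
        f.flush()
        rados("put", "obj1", f.name)              # cf. NeoRadosWriteOps.Write
    rados("setxattr", "obj1", "testattr", "v1")   # cf. NeoRadosWriteOps.Xattrs
    print(rados("getxattr", "obj1", "testattr").stdout.strip())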
2026-03-10T14:09:40.632 INFO:tasks.workunit.client.0.vm05.stderr:+ exit 0
2026-03-10T14:09:40.632 INFO:tasks.workunit.client.0.vm05.stderr:+ cleanup
2026-03-10T14:09:40.632 INFO:tasks.workunit.client.0.vm05.stderr:+ pkill -P 90955
2026-03-10T14:09:40.639 INFO:tasks.workunit.client.0.vm05.stderr:+ true
2026-03-10T14:09:40.640 INFO:teuthology.orchestra.run:Running command with timeout 3600
2026-03-10T14:09:40.640 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-10T14:09:40.678 INFO:tasks.workunit:Running workunits matching rados/test_pool_quota.sh on client.0...
2026-03-10T14:09:40.678 INFO:tasks.workunit:Running workunit rados/test_pool_quota.sh...
2026-03-10T14:09:40.678 DEBUG:teuthology.orchestra.run.vm05:workunit test rados/test_pool_quota.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_pool_quota.sh
2026-03-10T14:09:40.739 INFO:tasks.workunit.client.0.vm05.stderr:++ uuidgen
2026-03-10T14:09:40.741 INFO:tasks.workunit.client.0.vm05.stderr:+ p=29a3ecc7-28e3-45f6-a8f8-5780a9b8288e
2026-03-10T14:09:40.741 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool create 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e 12
2026-03-10T14:09:40.808 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.808+0000 7fab60098640 1 -- 192.168.123.105:0/570093416 >> v1:192.168.123.105:6789/0 conn(0x7fab5810b6a0 legacy=0x7fab5810da90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T14:09:40.808 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.808+0000 7fab60098640 1 -- 192.168.123.105:0/570093416 shutdown_connections
2026-03-10T14:09:40.808 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.808+0000 7fab60098640 1 -- 192.168.123.105:0/570093416 >> 192.168.123.105:0/570093416 conn(0x7fab580fd270 msgr2=0x7fab580ff690 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T14:09:40.808 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.808+0000 7fab60098640 1 -- 192.168.123.105:0/570093416 shutdown_connections
2026-03-10T14:09:40.809 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.808+0000 7fab60098640 1 -- 192.168.123.105:0/570093416 wait complete.
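Note: CEPH_CLI_TEST_DUP_COMMAND=1 in the workunit environment above makes the ceph CLI submit each mon command twice to verify idempotency; that is why the trace further below shows a second "osd pool create" immediately acked with "pool ... already exists" at return code 0. A sketch of the property under test, looping by hand rather than relying on the env hook (the pool name is the test's uuidgen-generated one; cluster access assumed):

    import subprocess

    cmd = ["ceph", "osd", "pool", "create",
           "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "12"]
    for attempt in (1, 2):
        # The second run must be a harmless no-op:
        # "pool '...' already exists", return code 0.
        r = subprocess.run(cmd, capture_output=True, text=True)
        print(attempt, r.returncode, (r.stdout + r.stderr).strip())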
2026-03-10T14:09:40.809 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.808+0000 7fab60098640 1 Processor -- start 2026-03-10T14:09:40.809 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.808+0000 7fab60098640 1 -- start start 2026-03-10T14:09:40.809 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.808+0000 7fab60098640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fab581ab9c0 con 0x7fab5810f1b0 2026-03-10T14:09:40.809 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.809+0000 7fab60098640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fab581acbc0 con 0x7fab5810b6a0 2026-03-10T14:09:40.809 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.809+0000 7fab60098640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fab581addc0 con 0x7fab581038a0 2026-03-10T14:09:40.809 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.809+0000 7fab5e60e640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7fab5810f1b0 0x7fab581aa0c0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:57544/0 (socket says 192.168.123.105:57544) 2026-03-10T14:09:40.809 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.809+0000 7fab5e60e640 1 -- 192.168.123.105:0/623780618 learned_addr learned my addr 192.168.123.105:0/623780618 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T14:09:40.809 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.809+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3923467901 0 0) 0x7fab581ab9c0 con 0x7fab5810f1b0 2026-03-10T14:09:40.809 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.809+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fab34003620 con 0x7fab5810f1b0 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.809+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 501731993 0 0) 0x7fab581acbc0 con 0x7fab5810b6a0 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.809+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fab581ab9c0 con 0x7fab5810b6a0 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.809+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3959563481 0 0) 0x7fab581addc0 con 0x7fab581038a0 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.809+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fab581acbc0 con 0x7fab581038a0 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.810+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3075533225 0 0) 0x7fab581ab9c0 con 0x7fab5810b6a0 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.810+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 --> v1:192.168.123.109:6789/0 -- 
auth(proto 2 165 bytes epoch 0) -- 0x7fab581addc0 con 0x7fab5810b6a0 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.810+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fab4c0044a0 con 0x7fab5810b6a0 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.810+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 57473828 0 0) 0x7fab34003620 con 0x7fab5810f1b0 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.810+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fab581ab9c0 con 0x7fab5810f1b0 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.810+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3358698190 0 0) 0x7fab581acbc0 con 0x7fab581038a0 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.810+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fab34003620 con 0x7fab581038a0 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.810+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fab540034c0 con 0x7fab5810f1b0 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.810+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fab48003080 con 0x7fab581038a0 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.810+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2184526699 0 0) 0x7fab581addc0 con 0x7fab5810b6a0 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.810+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 >> v1:192.168.123.105:6790/0 conn(0x7fab581038a0 legacy=0x7fab58102ee0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:09:40.810 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.810+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 >> v1:192.168.123.105:6789/0 conn(0x7fab5810f1b0 legacy=0x7fab581aa0c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:09:40.811 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.810+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fab581aefc0 con 0x7fab5810b6a0 2026-03-10T14:09:40.811 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.810+0000 7fab60098640 1 -- 192.168.123.105:0/623780618 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fab581acdf0 con 0x7fab5810b6a0 2026-03-10T14:09:40.811 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.810+0000 7fab60098640 1 -- 192.168.123.105:0/623780618 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fab581ad3b0 con 0x7fab5810b6a0 2026-03-10T14:09:40.812 
INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.811+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fab4c003270 con 0x7fab5810b6a0 2026-03-10T14:09:40.812 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.811+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fab4c004b00 con 0x7fab5810b6a0 2026-03-10T14:09:40.813 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.812+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 3169204467 0 0) 0x7fab4c004b00 con 0x7fab5810b6a0 2026-03-10T14:09:40.813 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.812+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(754..754 src has 1..754) ==== 7390+0+0 (unknown 3179340113 0 0) 0x7fab4c094270 con 0x7fab5810b6a0 2026-03-10T14:09:40.813 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.813+0000 7fab60098640 1 -- 192.168.123.105:0/623780618 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fab20005180 con 0x7fab5810b6a0 2026-03-10T14:09:40.817 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.816+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fab4c060a40 con 0x7fab5810b6a0 2026-03-10T14:09:40.915 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:40.914+0000 7fab60098640 1 -- 192.168.123.105:0/623780618 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12} v 0) -- 0x7fab20005470 con 0x7fab5810b6a0 2026-03-10T14:09:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:40 vm09.local ceph-mon[53367]: osdmap e753: 8 total, 8 up, 8 in 2026-03-10T14:09:40.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:40 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:41.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:40 vm05.local ceph-mon[58955]: osdmap e753: 8 total, 8 up, 8 in 2026-03-10T14:09:41.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:40 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:41.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:40 vm05.local ceph-mon[51512]: osdmap e753: 8 total, 8 up, 8 in 2026-03-10T14:09:41.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:40 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:41.694 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.694+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]=0 pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' created v755) ==== 176+0+0 (unknown 
910495197 0 0) 0x7fab4c065980 con 0x7fab5810b6a0 2026-03-10T14:09:41.751 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.750+0000 7fab60098640 1 -- 192.168.123.105:0/623780618 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12} v 0) -- 0x7fab20002980 con 0x7fab5810b6a0 2026-03-10T14:09:41.752 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.751+0000 7fab46ffd640 1 -- 192.168.123.105:0/623780618 <== mon.1 v1:192.168.123.109:6789/0 11 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]=0 pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' already exists v755) ==== 183+0+0 (unknown 303244213 0 0) 0x7fab20002980 con 0x7fab5810b6a0 2026-03-10T14:09:41.752 INFO:tasks.workunit.client.0.vm05.stderr:pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' already exists 2026-03-10T14:09:41.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.754+0000 7fab60098640 1 -- 192.168.123.105:0/623780618 >> v1:192.168.123.105:6800/1010796596 conn(0x7fab34078580 legacy=0x7fab3407aa40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:09:41.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.754+0000 7fab60098640 1 -- 192.168.123.105:0/623780618 >> v1:192.168.123.109:6789/0 conn(0x7fab5810b6a0 legacy=0x7fab581a68e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:09:41.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.754+0000 7fab60098640 1 -- 192.168.123.105:0/623780618 shutdown_connections 2026-03-10T14:09:41.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.754+0000 7fab60098640 1 -- 192.168.123.105:0/623780618 >> 192.168.123.105:0/623780618 conn(0x7fab580fd270 msgr2=0x7fab580ff660 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:09:41.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.754+0000 7fab60098640 1 -- 192.168.123.105:0/623780618 shutdown_connections 2026-03-10T14:09:41.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.754+0000 7fab60098640 1 -- 192.168.123.105:0/623780618 wait complete. 2026-03-10T14:09:41.762 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e max_objects 10 2026-03-10T14:09:41.816 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.816+0000 7fc8d6038640 1 -- 192.168.123.105:0/3841851534 >> v1:192.168.123.105:6790/0 conn(0x7fc8d0111380 legacy=0x7fc8d0113820 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:09:41.816 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.816+0000 7fc8d6038640 1 -- 192.168.123.105:0/3841851534 shutdown_connections 2026-03-10T14:09:41.816 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.816+0000 7fc8d6038640 1 -- 192.168.123.105:0/3841851534 >> 192.168.123.105:0/3841851534 conn(0x7fc8d01005c0 msgr2=0x7fc8d01029e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:09:41.816 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.816+0000 7fc8d6038640 1 -- 192.168.123.105:0/3841851534 shutdown_connections 2026-03-10T14:09:41.817 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.816+0000 7fc8d6038640 1 -- 192.168.123.105:0/3841851534 wait complete. 
2026-03-10T14:09:41.817 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.816+0000 7fc8d6038640 1 Processor -- start 2026-03-10T14:09:41.817 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.816+0000 7fc8d6038640 1 -- start start 2026-03-10T14:09:41.817 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.816+0000 7fc8d6038640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fc8d01ab760 con 0x7fc8d010a910 2026-03-10T14:09:41.817 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.816+0000 7fc8d6038640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fc8d01ac960 con 0x7fc8d010d7b0 2026-03-10T14:09:41.817 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.816+0000 7fc8d6038640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fc8d01adb60 con 0x7fc8d0111380 2026-03-10T14:09:41.818 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.817+0000 7fc8cffff640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7fc8d0111380 0x7fc8d01a9e60 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:39494/0 (socket says 192.168.123.105:39494) 2026-03-10T14:09:41.818 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.817+0000 7fc8cffff640 1 -- 192.168.123.105:0/3998387151 learned_addr learned my addr 192.168.123.105:0/3998387151 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T14:09:41.818 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.817+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2712643398 0 0) 0x7fc8d01adb60 con 0x7fc8d0111380 2026-03-10T14:09:41.818 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.817+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fc8a8003620 con 0x7fc8d0111380 2026-03-10T14:09:41.818 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.817+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1034320562 0 0) 0x7fc8d01ac960 con 0x7fc8d010d7b0 2026-03-10T14:09:41.818 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.817+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fc8d01adb60 con 0x7fc8d010d7b0 2026-03-10T14:09:41.818 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.817+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 582953246 0 0) 0x7fc8a8003620 con 0x7fc8d0111380 2026-03-10T14:09:41.818 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.817+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fc8d01ac960 con 0x7fc8d0111380 2026-03-10T14:09:41.818 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.817+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fc8c4004970 con 0x7fc8d0111380 2026-03-10T14:09:41.818 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.817+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 <== mon.1 v1:192.168.123.109:6789/0 2 
==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1482853362 0 0) 0x7fc8d01adb60 con 0x7fc8d010d7b0 2026-03-10T14:09:41.818 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.817+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fc8a8003620 con 0x7fc8d010d7b0 2026-03-10T14:09:41.818 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.818+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 289052630 0 0) 0x7fc8d01ac960 con 0x7fc8d0111380 2026-03-10T14:09:41.818 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.818+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 >> v1:192.168.123.109:6789/0 conn(0x7fc8d010d7b0 legacy=0x7fc8d01a6650 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:09:41.819 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.818+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 >> v1:192.168.123.105:6789/0 conn(0x7fc8d010a910 legacy=0x7fc8d0110aa0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:09:41.819 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.818+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc8d01aed60 con 0x7fc8d0111380 2026-03-10T14:09:41.819 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.818+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fc8c4003e90 con 0x7fc8d0111380 2026-03-10T14:09:41.819 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.818+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fc8c4005330 con 0x7fc8d0111380 2026-03-10T14:09:41.820 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.818+0000 7fc8d6038640 1 -- 192.168.123.105:0/3998387151 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fc8d01acb90 con 0x7fc8d0111380 2026-03-10T14:09:41.820 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.819+0000 7fc8d6038640 1 -- 192.168.123.105:0/3998387151 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7fc8d01ad150 con 0x7fc8d0111380 2026-03-10T14:09:41.821 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.820+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 3169204467 0 0) 0x7fc8c4003350 con 0x7fc8d0111380 2026-03-10T14:09:41.821 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.820+0000 7fc8d6038640 1 -- 192.168.123.105:0/3998387151 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fc894005180 con 0x7fc8d0111380 2026-03-10T14:09:41.824 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.820+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(755..755 src has 254..755) ==== 7765+0+0 (unknown 206173760 0 0) 0x7fc8c405a150 con 0x7fc8d0111380 2026-03-10T14:09:41.824 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.823+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 
1092875540 0 2568732696) 0x7fc8c4002d60 con 0x7fc8d0111380 2026-03-10T14:09:41.922 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:41.920+0000 7fc8d6038640 1 -- 192.168.123.105:0/3998387151 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"} v 0) -- 0x7fc894005470 con 0x7fc8d0111380 2026-03-10T14:09:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:41 vm05.local ceph-mon[58955]: pgmap v1644: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:41 vm05.local ceph-mon[58955]: osdmap e754: 8 total, 8 up, 8 in 2026-03-10T14:09:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:41 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/623780618' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]: dispatch 2026-03-10T14:09:42.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:41 vm05.local ceph-mon[58955]: from='client.50572 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]: dispatch 2026-03-10T14:09:42.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:41 vm05.local ceph-mon[51512]: pgmap v1644: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:42.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:41 vm05.local ceph-mon[51512]: osdmap e754: 8 total, 8 up, 8 in 2026-03-10T14:09:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:41 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/623780618' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]: dispatch 2026-03-10T14:09:42.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:41 vm05.local ceph-mon[51512]: from='client.50572 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]: dispatch 2026-03-10T14:09:42.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:41 vm09.local ceph-mon[53367]: pgmap v1644: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:42.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:41 vm09.local ceph-mon[53367]: osdmap e754: 8 total, 8 up, 8 in 2026-03-10T14:09:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:41 vm09.local ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/623780618' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]: dispatch 2026-03-10T14:09:42.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:41 vm09.local ceph-mon[53367]: from='client.50572 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]: dispatch 2026-03-10T14:09:42.698 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:42.698+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e v756) ==== 223+0+0 (unknown 1573870666 0 0) 0x7fc8c40621f0 con 0x7fc8d0111380 2026-03-10T14:09:42.755 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:42.754+0000 7fc8d6038640 1 -- 192.168.123.105:0/3998387151 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"} v 0) -- 0x7fc894005d40 con 0x7fc8d0111380 2026-03-10T14:09:43.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:42 vm05.local ceph-mon[58955]: from='client.50572 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]': finished 2026-03-10T14:09:43.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:42 vm05.local ceph-mon[58955]: osdmap e755: 8 total, 8 up, 8 in 2026-03-10T14:09:43.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:42 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/623780618' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]: dispatch 2026-03-10T14:09:43.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:42 vm05.local ceph-mon[58955]: from='client.50572 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]: dispatch 2026-03-10T14:09:43.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:42 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3998387151' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:09:43.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:42 vm05.local ceph-mon[58955]: from='client.50042 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:09:43.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:42 vm05.local ceph-mon[51512]: from='client.50572 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]': finished 2026-03-10T14:09:43.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:42 vm05.local ceph-mon[51512]: osdmap e755: 8 total, 8 up, 8 in 2026-03-10T14:09:43.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:42 vm05.local ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/623780618' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]: dispatch 2026-03-10T14:09:43.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:42 vm05.local ceph-mon[51512]: from='client.50572 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]: dispatch 2026-03-10T14:09:43.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:42 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3998387151' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:09:43.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:42 vm05.local ceph-mon[51512]: from='client.50042 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:09:43.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:42 vm09.local ceph-mon[53367]: from='client.50572 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]': finished 2026-03-10T14:09:43.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:42 vm09.local ceph-mon[53367]: osdmap e755: 8 total, 8 up, 8 in 2026-03-10T14:09:43.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:42 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/623780618' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]: dispatch 2026-03-10T14:09:43.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:42 vm09.local ceph-mon[53367]: from='client.50572 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pg_num": 12}]: dispatch 2026-03-10T14:09:43.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:42 vm09.local ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3998387151' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:09:43.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:42 vm09.local ceph-mon[53367]: from='client.50042 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:09:43.731 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.730+0000 7fc8ccff9640 1 -- 192.168.123.105:0/3998387151 <== mon.2 v1:192.168.123.105:6790/0 11 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e v757) ==== 223+0+0 (unknown 344652090 0 0) 0x7fc8c4067130 con 0x7fc8d0111380 2026-03-10T14:09:43.731 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_objects = 10 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e 2026-03-10T14:09:43.733 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.733+0000 7fc8d6038640 1 -- 192.168.123.105:0/3998387151 >> v1:192.168.123.105:6800/1010796596 conn(0x7fc8a8078050 legacy=0x7fc8a807a510 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:09:43.733 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.733+0000 7fc8d6038640 1 -- 192.168.123.105:0/3998387151 >> v1:192.168.123.105:6790/0 conn(0x7fc8d0111380 legacy=0x7fc8d01a9e60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:09:43.733 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.733+0000 7fc8d6038640 1 -- 192.168.123.105:0/3998387151 shutdown_connections 2026-03-10T14:09:43.733 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.733+0000 7fc8d6038640 1 -- 192.168.123.105:0/3998387151 >> 192.168.123.105:0/3998387151 conn(0x7fc8d01005c0 msgr2=0x7fc8d0114960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:09:43.733 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.733+0000 7fc8d6038640 1 -- 192.168.123.105:0/3998387151 shutdown_connections 2026-03-10T14:09:43.733 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.733+0000 7fc8d6038640 1 -- 192.168.123.105:0/3998387151 wait complete. 2026-03-10T14:09:43.742 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool application enable 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e rados 2026-03-10T14:09:43.795 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.794+0000 7f40689c1640 1 -- 192.168.123.105:0/2937909033 >> v1:192.168.123.105:6789/0 conn(0x7f406010f100 legacy=0x7f40601115a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:09:43.795 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.795+0000 7f40689c1640 1 -- 192.168.123.105:0/2937909033 shutdown_connections 2026-03-10T14:09:43.795 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.795+0000 7f40689c1640 1 -- 192.168.123.105:0/2937909033 >> 192.168.123.105:0/2937909033 conn(0x7f40600fe360 msgr2=0x7f4060100780 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:09:43.795 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.795+0000 7f40689c1640 1 -- 192.168.123.105:0/2937909033 shutdown_connections 2026-03-10T14:09:43.795 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.795+0000 7f40689c1640 1 -- 192.168.123.105:0/2937909033 wait complete. 
2026-03-10T14:09:43.795 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.795+0000 7f40689c1640 1 Processor -- start 2026-03-10T14:09:43.796 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.795+0000 7f40689c1640 1 -- start start 2026-03-10T14:09:43.796 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.795+0000 7f40689c1640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f40601a14b0 con 0x7f406010b530 2026-03-10T14:09:43.796 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.795+0000 7f40689c1640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f40601a16a0 con 0x7f406010f100 2026-03-10T14:09:43.796 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.795+0000 7f40689c1640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f40601b2300 con 0x7f4060108690 2026-03-10T14:09:43.796 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.795+0000 7f4066736640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f4060108690 0x7f406010e940 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:39508/0 (socket says 192.168.123.105:39508) 2026-03-10T14:09:43.796 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.795+0000 7f4066736640 1 -- 192.168.123.105:0/2513471240 learned_addr learned my addr 192.168.123.105:0/2513471240 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T14:09:43.796 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.795+0000 7f4065f35640 1 --1- 192.168.123.105:0/2513471240 >> v1:192.168.123.105:6789/0 conn(0x7f406010b530 0x7f40601a03a0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:57578/0 (socket says 192.168.123.105:57578) 2026-03-10T14:09:43.796 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1715847061 0 0) 0x7f40601a14b0 con 0x7f406010b530 2026-03-10T14:09:43.796 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f402c003620 con 0x7f406010b530 2026-03-10T14:09:43.796 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1283402014 0 0) 0x7f40601b2300 con 0x7f4060108690 2026-03-10T14:09:43.796 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f40601a14b0 con 0x7f4060108690 2026-03-10T14:09:43.796 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3289075826 0 0) 0x7f40601a14b0 con 0x7f4060108690 2026-03-10T14:09:43.797 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f40601b2300 con 0x7f4060108690 2026-03-10T14:09:43.797 
INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2613989931 0 0) 0x7f40601a16a0 con 0x7f406010f100 2026-03-10T14:09:43.797 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f40601a14b0 con 0x7f406010f100 2026-03-10T14:09:43.797 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f404c0030c0 con 0x7f4060108690 2026-03-10T14:09:43.797 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2425407959 0 0) 0x7f40601b2300 con 0x7f4060108690 2026-03-10T14:09:43.797 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 >> v1:192.168.123.109:6789/0 conn(0x7f406010f100 legacy=0x7f40601a0b90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:09:43.797 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 >> v1:192.168.123.105:6789/0 conn(0x7f406010b530 legacy=0x7f40601a03a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:09:43.797 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f40601b3500 con 0x7f4060108690 2026-03-10T14:09:43.797 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f40689c1640 1 -- 192.168.123.105:0/2513471240 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f40601b2530 con 0x7f4060108690 2026-03-10T14:09:43.797 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f40689c1640 1 -- 192.168.123.105:0/2513471240 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f40601b2af0 con 0x7f4060108690 2026-03-10T14:09:43.797 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f404c003b60 con 0x7f4060108690 2026-03-10T14:09:43.798 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.796+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f404c005ba0 con 0x7f4060108690 2026-03-10T14:09:43.798 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.798+0000 7f40689c1640 1 -- 192.168.123.105:0/2513471240 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4060103dc0 con 0x7f4060108690 2026-03-10T14:09:43.801 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.798+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 3169204467 0 0) 0x7f404c003710 con 0x7f4060108690 2026-03-10T14:09:43.801 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.799+0000 7f404b7fe640 1 -- 
192.168.123.105:0/2513471240 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(757..757 src has 254..757) ==== 7765+0+0 (unknown 455624620 0 0) 0x7f404c0957d0 con 0x7f4060108690 2026-03-10T14:09:43.801 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.801+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f404c061e20 con 0x7f4060108690 2026-03-10T14:09:43.892 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:43.892+0000 7f40689c1640 1 -- 192.168.123.105:0/2513471240 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"} v 0) -- 0x7f4060106610 con 0x7f4060108690 2026-03-10T14:09:44.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:43 vm05.local ceph-mon[51512]: pgmap v1647: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T14:09:44.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:43 vm05.local ceph-mon[51512]: from='client.50042 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]': finished 2026-03-10T14:09:44.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:43 vm05.local ceph-mon[51512]: osdmap e756: 8 total, 8 up, 8 in 2026-03-10T14:09:44.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:43 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3998387151' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:09:44.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:43 vm05.local ceph-mon[51512]: from='client.50042 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:09:44.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:43 vm05.local ceph-mon[58955]: pgmap v1647: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T14:09:44.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:43 vm05.local ceph-mon[58955]: from='client.50042 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]': finished 2026-03-10T14:09:44.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:43 vm05.local ceph-mon[58955]: osdmap e756: 8 total, 8 up, 8 in 2026-03-10T14:09:44.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:43 vm05.local ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3998387151' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:09:44.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:43 vm05.local ceph-mon[58955]: from='client.50042 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:09:44.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:43 vm09.local ceph-mon[53367]: pgmap v1647: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T14:09:44.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:43 vm09.local ceph-mon[53367]: from='client.50042 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]': finished 2026-03-10T14:09:44.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:43 vm09.local ceph-mon[53367]: osdmap e756: 8 total, 8 up, 8 in 2026-03-10T14:09:44.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:43 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3998387151' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:09:44.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:43 vm09.local ceph-mon[53367]: from='client.50042 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:09:44.758 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:44.757+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]=0 enabled application 'rados' on pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' v758) ==== 213+0+0 (unknown 17235892 0 0) 0x7f404c066d60 con 0x7f4060108690 2026-03-10T14:09:44.816 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:44.815+0000 7f40689c1640 1 -- 192.168.123.105:0/2513471240 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"} v 0) -- 0x7f40600008d0 con 0x7f4060108690 2026-03-10T14:09:45.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:44 vm05.local ceph-mon[58955]: from='client.50042 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]': finished 2026-03-10T14:09:45.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:44 vm05.local ceph-mon[58955]: osdmap e757: 8 total, 8 up, 8 in 2026-03-10T14:09:45.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:44 vm05.local ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2513471240' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]: dispatch 2026-03-10T14:09:45.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:44 vm05.local ceph-mon[58955]: from='client.50048 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]: dispatch 2026-03-10T14:09:45.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:44 vm05.local ceph-mon[51512]: from='client.50042 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]': finished 2026-03-10T14:09:45.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:44 vm05.local ceph-mon[51512]: osdmap e757: 8 total, 8 up, 8 in 2026-03-10T14:09:45.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:44 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2513471240' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]: dispatch 2026-03-10T14:09:45.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:44 vm05.local ceph-mon[51512]: from='client.50048 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]: dispatch 2026-03-10T14:09:45.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:44 vm09.local ceph-mon[53367]: from='client.50042 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "10"}]': finished 2026-03-10T14:09:45.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:44 vm09.local ceph-mon[53367]: osdmap e757: 8 total, 8 up, 8 in 2026-03-10T14:09:45.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:44 vm09.local ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2513471240' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]: dispatch 2026-03-10T14:09:45.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:44 vm09.local ceph-mon[53367]: from='client.50048 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]: dispatch 2026-03-10T14:09:45.893 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:45.892+0000 7f404b7fe640 1 -- 192.168.123.105:0/2513471240 <== mon.2 v1:192.168.123.105:6790/0 11 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]=0 enabled application 'rados' on pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' v759) ==== 213+0+0 (unknown 992414034 0 0) 0x7f404c059d80 con 0x7f4060108690 2026-03-10T14:09:45.893 INFO:tasks.workunit.client.0.vm05.stderr:enabled application 'rados' on pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' 2026-03-10T14:09:45.895 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:45.895+0000 7f40689c1640 1 -- 192.168.123.105:0/2513471240 >> v1:192.168.123.105:6800/1010796596 conn(0x7f402c078690 legacy=0x7f402c07ab50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:09:45.895 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:45.895+0000 7f40689c1640 1 -- 192.168.123.105:0/2513471240 >> v1:192.168.123.105:6790/0 conn(0x7f4060108690 legacy=0x7f406010e940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:09:45.895 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:45.895+0000 7f40689c1640 1 -- 192.168.123.105:0/2513471240 shutdown_connections 2026-03-10T14:09:45.895 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:45.895+0000 7f40689c1640 1 -- 192.168.123.105:0/2513471240 >> 192.168.123.105:0/2513471240 conn(0x7f40600fe360 msgr2=0x7f406010aad0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:09:45.895 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:45.895+0000 7f40689c1640 1 -- 192.168.123.105:0/2513471240 shutdown_connections 2026-03-10T14:09:45.896 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:09:45.895+0000 7f40689c1640 1 -- 192.168.123.105:0/2513471240 wait complete. 
2026-03-10T14:09:45.903 INFO:tasks.workunit.client.0.vm05.stderr:++ seq 1 10 2026-03-10T14:09:45.905 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-10T14:09:45.905 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e put obj1 /etc/passwd 2026-03-10T14:09:45.934 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-10T14:09:45.934 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e put obj2 /etc/passwd 2026-03-10T14:09:45.963 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-10T14:09:45.963 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e put obj3 /etc/passwd 2026-03-10T14:09:45.994 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-10T14:09:45.994 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e put obj4 /etc/passwd 2026-03-10T14:09:46.021 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-10T14:09:46.021 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e put obj5 /etc/passwd 2026-03-10T14:09:46.047 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-10T14:09:46.047 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e put obj6 /etc/passwd 2026-03-10T14:09:46.072 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-10T14:09:46.072 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e put obj7 /etc/passwd 2026-03-10T14:09:46.100 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-10T14:09:46.100 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e put obj8 /etc/passwd 2026-03-10T14:09:46.129 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-10T14:09:46.129 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e put obj9 /etc/passwd 2026-03-10T14:09:46.156 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10` 2026-03-10T14:09:46.156 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e put obj10 /etc/passwd 2026-03-10T14:09:46.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:45 vm09.local ceph-mon[53367]: pgmap v1650: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:46.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:45 vm09.local ceph-mon[53367]: from='client.50048 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]': finished 2026-03-10T14:09:46.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:45 vm09.local ceph-mon[53367]: osdmap e758: 8 total, 8 up, 8 in 2026-03-10T14:09:46.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:45 vm09.local ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2513471240' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]: dispatch 2026-03-10T14:09:46.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:45 vm09.local ceph-mon[53367]: from='client.50048 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]: dispatch 2026-03-10T14:09:46.185 INFO:tasks.workunit.client.0.vm05.stderr:+ sleep 30 2026-03-10T14:09:46.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:45 vm05.local ceph-mon[51512]: pgmap v1650: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:45 vm05.local ceph-mon[51512]: from='client.50048 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]': finished 2026-03-10T14:09:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:45 vm05.local ceph-mon[51512]: osdmap e758: 8 total, 8 up, 8 in 2026-03-10T14:09:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:45 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2513471240' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]: dispatch 2026-03-10T14:09:46.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:45 vm05.local ceph-mon[51512]: from='client.50048 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]: dispatch 2026-03-10T14:09:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:45 vm05.local ceph-mon[58955]: pgmap v1650: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:45 vm05.local ceph-mon[58955]: from='client.50048 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]': finished 2026-03-10T14:09:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:45 vm05.local ceph-mon[58955]: osdmap e758: 8 total, 8 up, 8 in 2026-03-10T14:09:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:45 vm05.local ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2513471240' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]: dispatch 2026-03-10T14:09:46.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:45 vm05.local ceph-mon[58955]: from='client.50048 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]: dispatch 2026-03-10T14:09:47.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:46 vm05.local ceph-mon[58955]: from='client.50048 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]': finished 2026-03-10T14:09:47.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:46 vm05.local ceph-mon[58955]: osdmap e759: 8 total, 8 up, 8 in 2026-03-10T14:09:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:46 vm05.local ceph-mon[51512]: from='client.50048 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]': finished 2026-03-10T14:09:47.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:46 vm05.local ceph-mon[51512]: osdmap e759: 8 total, 8 up, 8 in 2026-03-10T14:09:47.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:46 vm09.local ceph-mon[53367]: from='client.50048 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "app": "rados"}]': finished 2026-03-10T14:09:47.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:46 vm09.local ceph-mon[53367]: osdmap e759: 8 total, 8 up, 8 in 2026-03-10T14:09:48.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:47 vm09.local ceph-mon[53367]: pgmap v1653: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:48.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:47 vm09.local ceph-mon[53367]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:09:48.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:47 vm05.local ceph-mon[51512]: pgmap v1653: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:48.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:47 vm05.local ceph-mon[51512]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:09:48.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:47 vm05.local ceph-mon[58955]: pgmap v1653: 176 pgs: 176 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:09:48.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:47 vm05.local ceph-mon[58955]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:09:49.976 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:09:49 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:09:49.976 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:49 vm09.local ceph-mon[53367]: pgmap v1654: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 3.5 KiB/s wr, 3 op/s 2026-03-10T14:09:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:49 vm05.local ceph-mon[58955]: pgmap v1654: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB 
used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 3.5 KiB/s wr, 3 op/s 2026-03-10T14:09:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:49 vm05.local ceph-mon[51512]: pgmap v1654: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 3.5 KiB/s wr, 3 op/s 2026-03-10T14:09:50.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:09:49 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:09:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:09:51.233 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:50 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:51.233 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:50 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:51.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:50 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:09:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:51 vm05.local ceph-mon[58955]: pgmap v1655: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 761 B/s rd, 3.0 KiB/s wr, 2 op/s 2026-03-10T14:09:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:51 vm05.local ceph-mon[58955]: pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' is full (reached quota's max_objects: 10) 2026-03-10T14:09:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:51 vm05.local ceph-mon[58955]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T14:09:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:51 vm05.local ceph-mon[58955]: osdmap e760: 8 total, 8 up, 8 in 2026-03-10T14:09:52.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:51 vm05.local ceph-mon[51512]: pgmap v1655: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 761 B/s rd, 3.0 KiB/s wr, 2 op/s 2026-03-10T14:09:52.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:51 vm05.local ceph-mon[51512]: pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' is full (reached quota's max_objects: 10) 2026-03-10T14:09:52.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:51 vm05.local ceph-mon[51512]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T14:09:52.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:51 vm05.local ceph-mon[51512]: osdmap e760: 8 total, 8 up, 8 in 2026-03-10T14:09:52.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:51 vm09.local ceph-mon[53367]: pgmap v1655: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 761 B/s rd, 3.0 KiB/s wr, 2 op/s 2026-03-10T14:09:52.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:51 vm09.local ceph-mon[53367]: pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' is full (reached quota's max_objects: 10) 2026-03-10T14:09:52.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:51 vm09.local ceph-mon[53367]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T14:09:52.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:51 vm09.local ceph-mon[53367]: osdmap e760: 8 total, 8 up, 8 in 2026-03-10T14:09:54.094 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:53 
vm09.local ceph-mon[53367]: pgmap v1657: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 665 B/s rd, 2.6 KiB/s wr, 1 op/s 2026-03-10T14:09:54.094 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:53 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:09:54.094 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:53 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:54.094 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:53 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:09:54.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:53 vm05.local ceph-mon[51512]: pgmap v1657: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 665 B/s rd, 2.6 KiB/s wr, 1 op/s 2026-03-10T14:09:54.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:53 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:09:54.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:53 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:54.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:53 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:09:54.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:53 vm05.local ceph-mon[58955]: pgmap v1657: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 665 B/s rd, 2.6 KiB/s wr, 1 op/s 2026-03-10T14:09:54.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:53 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:09:54.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:53 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:54.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:53 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:09:55.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:55 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:55.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:55 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:55.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:55 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:55.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:55 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:55.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:55 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:09:55.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:55 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:09:55.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:55 
vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:55.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:55 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:55.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:55 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:55.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:55 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:55.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:55 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:55.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:55 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:09:55.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:55 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:09:55.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:55 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:55.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:55 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:55.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:55 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:55.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:55 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:55.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:55 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:55.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:55 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:09:55.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:55 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:09:55.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:55 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:09:56.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:56 vm09.local ceph-mon[53367]: pgmap v1658: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.3 KiB/s wr, 2 op/s 2026-03-10T14:09:56.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:56 vm05.local ceph-mon[51512]: pgmap v1658: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.3 KiB/s wr, 2 op/s 2026-03-10T14:09:56.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:56 vm05.local ceph-mon[58955]: pgmap v1658: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.3 KiB/s wr, 2 op/s 2026-03-10T14:09:58.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:58 vm05.local ceph-mon[51512]: pgmap v1659: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-10T14:09:58.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:58 vm05.local ceph-mon[58955]: pgmap v1659: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s 
wr, 1 op/s 2026-03-10T14:09:58.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:58 vm09.local ceph-mon[53367]: pgmap v1659: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-10T14:09:59.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:09:59 vm05.local ceph-mon[51512]: pgmap v1660: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:09:59.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:09:59 vm05.local ceph-mon[58955]: pgmap v1660: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:09:59.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:09:59 vm09.local ceph-mon[53367]: pgmap v1660: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:10:00.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:09:59 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:10:00.291 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:09:59 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:09:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:10:00.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:00 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:00.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:00 vm05.local ceph-mon[58955]: Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled; 1 pool(s) full 2026-03-10T14:10:00.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:00 vm05.local ceph-mon[58955]: [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled 2026-03-10T14:10:00.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:00 vm05.local ceph-mon[58955]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T14:10:00.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:00 vm05.local ceph-mon[58955]: use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
2026-03-10T14:10:00.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:00 vm05.local ceph-mon[58955]: [WRN] POOL_FULL: 1 pool(s) full 2026-03-10T14:10:00.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:00 vm05.local ceph-mon[58955]: pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' is full (running out of quota) 2026-03-10T14:10:00.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:00 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:00.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:00 vm05.local ceph-mon[51512]: Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled; 1 pool(s) full 2026-03-10T14:10:00.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:00 vm05.local ceph-mon[51512]: [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled 2026-03-10T14:10:00.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:00 vm05.local ceph-mon[51512]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T14:10:00.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:00 vm05.local ceph-mon[51512]: use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T14:10:00.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:00 vm05.local ceph-mon[51512]: [WRN] POOL_FULL: 1 pool(s) full 2026-03-10T14:10:00.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:00 vm05.local ceph-mon[51512]: pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' is full (running out of quota) 2026-03-10T14:10:00.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:00 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:00.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:00 vm09.local ceph-mon[53367]: Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled; 1 pool(s) full 2026-03-10T14:10:00.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:00 vm09.local ceph-mon[53367]: [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled 2026-03-10T14:10:00.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:00 vm09.local ceph-mon[53367]: application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T14:10:00.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:00 vm09.local ceph-mon[53367]: use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 
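For reference: the POOL_APP_NOT_ENABLED hint repeated above refers to the "ceph osd pool application enable <app-name>" command. A minimal illustration for the pool named in the warning (not part of the captured run; "rados" is an arbitrary freeform label chosen only for this example):

    ceph osd pool application enable ceph_test_rados_api_asio rados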
2026-03-10T14:10:00.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:00 vm09.local ceph-mon[53367]: [WRN] POOL_FULL: 1 pool(s) full 2026-03-10T14:10:00.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:00 vm09.local ceph-mon[53367]: pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' is full (running out of quota) 2026-03-10T14:10:01.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:01 vm05.local ceph-mon[58955]: pgmap v1661: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:10:01.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:01 vm05.local ceph-mon[51512]: pgmap v1661: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:10:01.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:01 vm09.local ceph-mon[53367]: pgmap v1661: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:10:03.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:03 vm05.local ceph-mon[58955]: pgmap v1662: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 930 B/s rd, 0 op/s 2026-03-10T14:10:03.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:03 vm05.local ceph-mon[51512]: pgmap v1662: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 930 B/s rd, 0 op/s 2026-03-10T14:10:03.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:03 vm09.local ceph-mon[53367]: pgmap v1662: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 930 B/s rd, 0 op/s 2026-03-10T14:10:04.924 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 14:10:04 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=infra.usagestats t=2026-03-10T14:10:04.534647885Z level=info msg="Usage stats are ready to report" 2026-03-10T14:10:05.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:05 vm05.local ceph-mon[58955]: pgmap v1663: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:05.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:05 vm05.local ceph-mon[51512]: pgmap v1663: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:05.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:05 vm09.local ceph-mon[53367]: pgmap v1663: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:07.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:07 vm05.local ceph-mon[58955]: pgmap v1664: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:07.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:07 vm05.local ceph-mon[51512]: pgmap v1664: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:07.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:07 vm09.local ceph-mon[53367]: pgmap v1664: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:09.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:09 vm09.local ceph-mon[53367]: pgmap v1665: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:09.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:09 vm09.local ceph-mon[53367]: 
from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:10:09.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:10:09 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:10:09.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:09 vm05.local ceph-mon[58955]: pgmap v1665: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:09.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:09 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:10:09.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:09 vm05.local ceph-mon[51512]: pgmap v1665: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:09.981 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:09 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:10:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:10:09 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:10:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:10:10.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:10 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:11.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:10 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:11.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:10 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:12.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:11 vm05.local ceph-mon[58955]: pgmap v1666: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:12.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:11 vm05.local ceph-mon[51512]: pgmap v1666: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:12.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:11 vm09.local ceph-mon[53367]: pgmap v1666: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:14.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:13 vm05.local ceph-mon[58955]: pgmap v1667: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:14.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:13 vm05.local ceph-mon[51512]: pgmap v1667: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:14.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:13 vm09.local ceph-mon[53367]: pgmap v1667: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-10T14:10:16.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:15 vm05.local ceph-mon[58955]: pgmap v1668: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:16.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:15 vm05.local ceph-mon[51512]: pgmap v1668: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:16.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:15 vm09.local ceph-mon[53367]: pgmap v1668: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:16.187 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=166263 2026-03-10T14:10:16.187 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e max_objects 100 2026-03-10T14:10:16.187 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e put onemore /etc/passwd 2026-03-10T14:10:16.246 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.245+0000 7fd11235c640 1 -- 192.168.123.105:0/683277457 >> v1:192.168.123.105:6789/0 conn(0x7fd10c077620 legacy=0x7fd10c077a00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:16.246 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.246+0000 7fd11235c640 1 -- 192.168.123.105:0/683277457 shutdown_connections 2026-03-10T14:10:16.246 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.246+0000 7fd11235c640 1 -- 192.168.123.105:0/683277457 >> 192.168.123.105:0/683277457 conn(0x7fd10c1005c0 msgr2=0x7fd10c1029e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:10:16.246 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.246+0000 7fd11235c640 1 -- 192.168.123.105:0/683277457 shutdown_connections 2026-03-10T14:10:16.246 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.246+0000 7fd11235c640 1 -- 192.168.123.105:0/683277457 wait complete. 
2026-03-10T14:10:16.247 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.246+0000 7fd11235c640 1 Processor -- start 2026-03-10T14:10:16.247 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.246+0000 7fd11235c640 1 -- start start 2026-03-10T14:10:16.247 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.246+0000 7fd11235c640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd10c1155b0 con 0x7fd10c077620 2026-03-10T14:10:16.247 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.246+0000 7fd11235c640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd10c1157a0 con 0x7fd10c115990 2026-03-10T14:10:16.247 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.247+0000 7fd11235c640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fd10c1b2300 con 0x7fd10c078110 2026-03-10T14:10:16.247 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.247+0000 7fd10b7fe640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7fd10c078110 0x7fd10c112ad0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:49248/0 (socket says 192.168.123.105:49248) 2026-03-10T14:10:16.247 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.247+0000 7fd10b7fe640 1 -- 192.168.123.105:0/2624694834 learned_addr learned my addr 192.168.123.105:0/2624694834 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.247+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 859477791 0 0) 0x7fd10c1b2300 con 0x7fd10c078110 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.247+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd0e0003620 con 0x7fd10c078110 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.247+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2471356072 0 0) 0x7fd10c1157a0 con 0x7fd10c115990 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.247+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd10c1b2300 con 0x7fd10c115990 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.247+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 4191212918 0 0) 0x7fd10c1155b0 con 0x7fd10c077620 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.247+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd10c1157a0 con 0x7fd10c077620 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.247+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3177089492 0 0) 0x7fd10c1157a0 con 0x7fd10c077620 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.247+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 --> 
v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd10c1155b0 con 0x7fd10c077620 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.247+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fd0f8002c20 con 0x7fd10c077620 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.248+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3184998056 0 0) 0x7fd0e0003620 con 0x7fd10c078110 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.248+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd10c1157a0 con 0x7fd10c078110 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.248+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fd0fc004180 con 0x7fd10c078110 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.248+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 974864073 0 0) 0x7fd10c1b2300 con 0x7fd10c115990 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.248+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fd0e0003620 con 0x7fd10c115990 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.248+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fd100003370 con 0x7fd10c115990 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.248+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 638000846 0 0) 0x7fd10c1155b0 con 0x7fd10c077620 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.248+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 >> v1:192.168.123.105:6790/0 conn(0x7fd10c078110 legacy=0x7fd10c112ad0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.248+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 >> v1:192.168.123.109:6789/0 conn(0x7fd10c115990 legacy=0x7fd10c1aebc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:16.248 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.248+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd10c1b3500 con 0x7fd10c077620 2026-03-10T14:10:16.250 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.248+0000 7fd11235c640 1 -- 192.168.123.105:0/2624694834 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fd10c1b24d0 con 0x7fd10c077620 2026-03-10T14:10:16.250 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.248+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fd0f8002dc0 con 
0x7fd10c077620 2026-03-10T14:10:16.250 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.248+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fd0f8004df0 con 0x7fd10c077620 2026-03-10T14:10:16.250 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.248+0000 7fd11235c640 1 -- 192.168.123.105:0/2624694834 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fd10c1b2a90 con 0x7fd10c077620 2026-03-10T14:10:16.252 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.250+0000 7fd11235c640 1 -- 192.168.123.105:0/2624694834 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd0d0005180 con 0x7fd10c077620 2026-03-10T14:10:16.252 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.250+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 3169204467 0 0) 0x7fd0f8004df0 con 0x7fd10c077620 2026-03-10T14:10:16.252 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.251+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(760..760 src has 254..760) ==== 7778+0+0 (unknown 4238707866 0 0) 0x7fd0f8058920 con 0x7fd10c077620 2026-03-10T14:10:16.252 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.251+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=761}) -- 0x7fd10c1155b0 con 0x7fd10c077620 2026-03-10T14:10:16.254 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.254+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fd0f80609c0 con 0x7fd10c077620 2026-03-10T14:10:16.350 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.349+0000 7fd11235c640 1 -- 192.168.123.105:0/2624694834 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "100"} v 0) -- 0x7fd0d0005470 con 0x7fd10c077620 2026-03-10T14:10:16.819 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.819+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "100"}]=0 set-quota max_objects = 100 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e v761) ==== 225+0+0 (unknown 420887384 0 0) 0x7fd0f8065900 con 0x7fd10c077620 2026-03-10T14:10:16.838 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.837+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.0 v1:192.168.123.105:6789/0 11 ==== osd_map(761..761 src has 254..761) ==== 628+0+0 (unknown 2373964271 0 0) 0x7fd0f8003440 con 0x7fd10c077620 2026-03-10T14:10:16.838 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.837+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=762}) -- 0x7fd0e0003620 con 0x7fd10c077620 2026-03-10T14:10:16.878 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:16.878+0000 7fd11235c640 1 -- 192.168.123.105:0/2624694834 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": 
"max_objects", "val": "100"} v 0) -- 0x7fd0d00020e0 con 0x7fd10c077620 2026-03-10T14:10:17.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:16 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2624694834' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T14:10:17.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:16 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2624694834' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T14:10:17.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:16 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2624694834' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T14:10:17.822 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:17.821+0000 7fd1097fa640 1 -- 192.168.123.105:0/2624694834 <== mon.0 v1:192.168.123.105:6789/0 12 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "100"}]=0 set-quota max_objects = 100 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e v762) ==== 225+0+0 (unknown 1613987792 0 0) 0x7fd0f8092140 con 0x7fd10c077620 2026-03-10T14:10:17.822 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_objects = 100 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e 2026-03-10T14:10:17.826 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:17.825+0000 7fd11235c640 1 -- 192.168.123.105:0/2624694834 >> v1:192.168.123.105:6800/1010796596 conn(0x7fd0e00784c0 legacy=0x7fd0e007a980 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:17.826 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:17.825+0000 7fd11235c640 1 -- 192.168.123.105:0/2624694834 >> v1:192.168.123.105:6789/0 conn(0x7fd10c077620 legacy=0x7fd10c1123c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:17.826 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:17.825+0000 7fd11235c640 1 -- 192.168.123.105:0/2624694834 shutdown_connections 2026-03-10T14:10:17.826 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:17.825+0000 7fd11235c640 1 -- 192.168.123.105:0/2624694834 >> 192.168.123.105:0/2624694834 conn(0x7fd10c1005c0 msgr2=0x7fd10c079a70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:10:17.826 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:17.825+0000 7fd11235c640 1 -- 192.168.123.105:0/2624694834 shutdown_connections 2026-03-10T14:10:17.826 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:17.826+0000 7fd11235c640 1 -- 192.168.123.105:0/2624694834 wait complete. 2026-03-10T14:10:17.841 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 166263 2026-03-10T14:10:18.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:17 vm09.local ceph-mon[53367]: pgmap v1669: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:18.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:17 vm09.local ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/2624694834' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "100"}]': finished 2026-03-10T14:10:18.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:17 vm09.local ceph-mon[53367]: osdmap e761: 8 total, 8 up, 8 in 2026-03-10T14:10:18.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:17 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2624694834' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T14:10:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:17 vm05.local ceph-mon[58955]: pgmap v1669: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:17 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2624694834' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "100"}]': finished 2026-03-10T14:10:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:17 vm05.local ceph-mon[58955]: osdmap e761: 8 total, 8 up, 8 in 2026-03-10T14:10:18.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:17 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2624694834' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T14:10:18.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:17 vm05.local ceph-mon[51512]: pgmap v1669: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:18.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:17 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2624694834' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "100"}]': finished 2026-03-10T14:10:18.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:17 vm05.local ceph-mon[51512]: osdmap e761: 8 total, 8 up, 8 in 2026-03-10T14:10:18.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:17 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2624694834' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T14:10:19.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:18 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2624694834' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "100"}]': finished 2026-03-10T14:10:19.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:18 vm05.local ceph-mon[51512]: osdmap e762: 8 total, 8 up, 8 in 2026-03-10T14:10:19.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:18 vm05.local ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/2624694834' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "100"}]': finished 2026-03-10T14:10:19.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:18 vm05.local ceph-mon[58955]: osdmap e762: 8 total, 8 up, 8 in 2026-03-10T14:10:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:18 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2624694834' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "100"}]': finished 2026-03-10T14:10:19.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:18 vm09.local ceph-mon[53367]: osdmap e762: 8 total, 8 up, 8 in 2026-03-10T14:10:19.991 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:10:19 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:10:20.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:19 vm05.local ceph-mon[58955]: pgmap v1672: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:20.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:19 vm05.local ceph-mon[51512]: pgmap v1672: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:20.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:10:19 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:10:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:10:20.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:19 vm09.local ceph-mon[53367]: pgmap v1672: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:21.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:20 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:21.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:20 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:21.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:20 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:21.470 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 0 -ne 0 ']' 2026-03-10T14:10:21.470 INFO:tasks.workunit.client.0.vm05.stderr:+ true 2026-03-10T14:10:21.470 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e put twomore /etc/passwd 2026-03-10T14:10:21.497 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e max_bytes 100 2026-03-10T14:10:21.551 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.551+0000 7ff39e44f640 1 -- 192.168.123.105:0/2090303666 >> v1:192.168.123.105:6790/0 conn(0x7ff398108680 legacy=0x7ff398108a60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:21.552 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.551+0000 7ff39e44f640 1 -- 192.168.123.105:0/2090303666 shutdown_connections 2026-03-10T14:10:21.552 
INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.551+0000 7ff39e44f640 1 -- 192.168.123.105:0/2090303666 >> 192.168.123.105:0/2090303666 conn(0x7ff3980fe3b0 msgr2=0x7ff3981007d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:10:21.552 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.551+0000 7ff39e44f640 1 -- 192.168.123.105:0/2090303666 shutdown_connections 2026-03-10T14:10:21.552 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.551+0000 7ff39e44f640 1 -- 192.168.123.105:0/2090303666 wait complete. 2026-03-10T14:10:21.552 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.551+0000 7ff39e44f640 1 Processor -- start 2026-03-10T14:10:21.552 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.551+0000 7ff39e44f640 1 -- start start 2026-03-10T14:10:21.552 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.551+0000 7ff39e44f640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff3981a1580 con 0x7ff398108680 2026-03-10T14:10:21.552 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.551+0000 7ff39e44f640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff3981b12e0 con 0x7ff39810f0f0 2026-03-10T14:10:21.552 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.551+0000 7ff39e44f640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff3981b24c0 con 0x7ff39810b520 2026-03-10T14:10:21.552 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.552+0000 7ff3977fe640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7ff39810b520 0x7ff3981a0390 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:46946/0 (socket says 192.168.123.105:46946) 2026-03-10T14:10:21.552 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.552+0000 7ff3977fe640 1 -- 192.168.123.105:0/3268853402 learned_addr learned my addr 192.168.123.105:0/3268853402 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T14:10:21.552 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.552+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 943151597 0 0) 0x7ff3981a1580 con 0x7ff398108680 2026-03-10T14:10:21.553 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.552+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff368003620 con 0x7ff398108680 2026-03-10T14:10:21.553 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.552+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3837577994 0 0) 0x7ff368003620 con 0x7ff398108680 2026-03-10T14:10:21.553 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.552+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7ff3981a1580 con 0x7ff398108680 2026-03-10T14:10:21.553 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.552+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7ff37c003140 con 0x7ff398108680 2026-03-10T14:10:21.553 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.552+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 <== mon.0 
v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2839841791 0 0) 0x7ff3981a1580 con 0x7ff398108680 2026-03-10T14:10:21.553 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.552+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 >> v1:192.168.123.105:6790/0 conn(0x7ff39810b520 legacy=0x7ff3981a0390 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:21.553 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.553+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 >> v1:192.168.123.109:6789/0 conn(0x7ff39810f0f0 legacy=0x7ff3981a0c90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:21.553 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.553+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff3981b36a0 con 0x7ff398108680 2026-03-10T14:10:21.554 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.553+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7ff37c0027f0 con 0x7ff398108680 2026-03-10T14:10:21.554 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.553+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7ff37c005240 con 0x7ff398108680 2026-03-10T14:10:21.554 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.554+0000 7ff39e44f640 1 -- 192.168.123.105:0/3268853402 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7ff3981b1510 con 0x7ff398108680 2026-03-10T14:10:21.554 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.554+0000 7ff39e44f640 1 -- 192.168.123.105:0/3268853402 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7ff3981b1ad0 con 0x7ff398108680 2026-03-10T14:10:21.556 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.555+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 3169204467 0 0) 0x7ff37c003b20 con 0x7ff398108680 2026-03-10T14:10:21.556 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.555+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(763..763 src has 254..763) ==== 7778+0+0 (unknown 2790054908 0 0) 0x7ff37c094740 con 0x7ff398108680 2026-03-10T14:10:21.556 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.556+0000 7ff39e44f640 1 -- 192.168.123.105:0/3268853402 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff398103db0 con 0x7ff398108680 2026-03-10T14:10:21.559 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.559+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7ff37c060d80 con 0x7ff398108680 2026-03-10T14:10:21.655 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:21.654+0000 7ff39e44f640 1 -- 192.168.123.105:0/3268853402 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"} v 0) -- 0x7ff398076830 con 0x7ff398108680 2026-03-10T14:10:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:22 vm05.local 
ceph-mon[58955]: pgmap v1673: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:22 vm05.local ceph-mon[58955]: pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' no longer out of quota; removing NO_QUOTA flag 2026-03-10T14:10:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:22 vm05.local ceph-mon[58955]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T14:10:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:22 vm05.local ceph-mon[58955]: osdmap e763: 8 total, 8 up, 8 in 2026-03-10T14:10:22.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:22 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3268853402' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T14:10:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:22 vm05.local ceph-mon[51512]: pgmap v1673: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:22 vm05.local ceph-mon[51512]: pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' no longer out of quota; removing NO_QUOTA flag 2026-03-10T14:10:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:22 vm05.local ceph-mon[51512]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T14:10:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:22 vm05.local ceph-mon[51512]: osdmap e763: 8 total, 8 up, 8 in 2026-03-10T14:10:22.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:22 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3268853402' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T14:10:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:22 vm09.local ceph-mon[53367]: pgmap v1673: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:22 vm09.local ceph-mon[53367]: pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' no longer out of quota; removing NO_QUOTA flag 2026-03-10T14:10:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:22 vm09.local ceph-mon[53367]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T14:10:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:22 vm09.local ceph-mon[53367]: osdmap e763: 8 total, 8 up, 8 in 2026-03-10T14:10:22.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:22 vm09.local ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/3268853402' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T14:10:22.455 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:22.455+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"}]=0 set-quota max_bytes = 100 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e v764) ==== 221+0+0 (unknown 284777146 0 0) 0x7ff37c065cc0 con 0x7ff398108680 2026-03-10T14:10:22.518 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:22.518+0000 7ff39e44f640 1 -- 192.168.123.105:0/3268853402 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"} v 0) -- 0x7ff398114e90 con 0x7ff398108680 2026-03-10T14:10:23.463 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:23.462+0000 7ff3957fa640 1 -- 192.168.123.105:0/3268853402 <== mon.0 v1:192.168.123.105:6789/0 11 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"}]=0 set-quota max_bytes = 100 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e v765) ==== 221+0+0 (unknown 2427030363 0 0) 0x7ff37c003480 con 0x7ff398108680 2026-03-10T14:10:23.463 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_bytes = 100 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e 2026-03-10T14:10:23.465 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:23.465+0000 7ff39e44f640 1 -- 192.168.123.105:0/3268853402 >> v1:192.168.123.105:6800/1010796596 conn(0x7ff368078180 legacy=0x7ff36807a640 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:23.465 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:23.465+0000 7ff39e44f640 1 -- 192.168.123.105:0/3268853402 >> v1:192.168.123.105:6789/0 conn(0x7ff398108680 legacy=0x7ff398076d70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:23.466 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:23.465+0000 7ff39e44f640 1 -- 192.168.123.105:0/3268853402 shutdown_connections 2026-03-10T14:10:23.466 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:23.465+0000 7ff39e44f640 1 -- 192.168.123.105:0/3268853402 >> 192.168.123.105:0/3268853402 conn(0x7ff3980fe3b0 msgr2=0x7ff398112720 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:10:23.466 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:23.465+0000 7ff39e44f640 1 -- 192.168.123.105:0/3268853402 shutdown_connections 2026-03-10T14:10:23.466 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:23.465+0000 7ff39e44f640 1 -- 192.168.123.105:0/3268853402 wait complete. 2026-03-10T14:10:23.473 INFO:tasks.workunit.client.0.vm05.stderr:+ sleep 30 2026-03-10T14:10:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:23 vm05.local ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3268853402' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T14:10:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:23 vm05.local ceph-mon[58955]: osdmap e764: 8 total, 8 up, 8 in 2026-03-10T14:10:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:23 vm05.local ceph-mon[58955]: pgmap v1675: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T14:10:23.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:23 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3268853402' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T14:10:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:23 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3268853402' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T14:10:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:23 vm05.local ceph-mon[51512]: osdmap e764: 8 total, 8 up, 8 in 2026-03-10T14:10:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:23 vm05.local ceph-mon[51512]: pgmap v1675: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T14:10:23.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:23 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3268853402' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T14:10:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:23 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3268853402' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T14:10:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:23 vm09.local ceph-mon[53367]: osdmap e764: 8 total, 8 up, 8 in 2026-03-10T14:10:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:23 vm09.local ceph-mon[53367]: pgmap v1675: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T14:10:23.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:23 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3268853402' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T14:10:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:24 vm05.local ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3268853402' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T14:10:24.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:24 vm05.local ceph-mon[58955]: osdmap e765: 8 total, 8 up, 8 in 2026-03-10T14:10:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:24 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:10:24.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:24 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:10:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:24 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3268853402' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T14:10:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:24 vm05.local ceph-mon[51512]: osdmap e765: 8 total, 8 up, 8 in 2026-03-10T14:10:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:24 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:10:24.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:24 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:10:24.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:24 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3268853402' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T14:10:24.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:24 vm09.local ceph-mon[53367]: osdmap e765: 8 total, 8 up, 8 in 2026-03-10T14:10:24.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:24 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:10:24.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:24 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:10:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:25 vm05.local ceph-mon[58955]: pgmap v1678: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 682 B/s wr, 1 op/s 2026-03-10T14:10:26.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:25 vm05.local ceph-mon[51512]: pgmap v1678: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 682 B/s wr, 1 op/s 2026-03-10T14:10:26.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:25 vm09.local ceph-mon[53367]: pgmap v1678: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 682 B/s wr, 1 op/s 2026-03-10T14:10:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:26 vm05.local ceph-mon[58955]: pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' is full (reached quota's max_bytes: 100 B) 2026-03-10T14:10:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:26 vm05.local ceph-mon[58955]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T14:10:27.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:26 vm05.local ceph-mon[58955]: osdmap e766: 8 total, 8 up, 8 in 
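The workunit stderr lines above trace a pool-quota exercise against the test pool. A condensed sketch of the observed sequence (commands copied from the trace; the exact workunit script is not reproduced here):

    pool=29a3ecc7-28e3-45f6-a8f8-5780a9b8288e
    ceph osd pool set-quota "$pool" max_objects 100   # object-count quota
    rados -p "$pool" put onemore /etc/passwd          # write accepted while under quota
    rados -p "$pool" put twomore /etc/passwd
    ceph osd pool set-quota "$pool" max_bytes 100     # byte quota already exceeded by existing data
    sleep 30                                          # allow the monitors to raise POOL_FULL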
2026-03-10T14:10:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:26 vm05.local ceph-mon[51512]: pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' is full (reached quota's max_bytes: 100 B) 2026-03-10T14:10:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:26 vm05.local ceph-mon[51512]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T14:10:27.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:26 vm05.local ceph-mon[51512]: osdmap e766: 8 total, 8 up, 8 in 2026-03-10T14:10:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:26 vm09.local ceph-mon[53367]: pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' is full (reached quota's max_bytes: 100 B) 2026-03-10T14:10:27.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:26 vm09.local ceph-mon[53367]: Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T14:10:27.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:26 vm09.local ceph-mon[53367]: osdmap e766: 8 total, 8 up, 8 in 2026-03-10T14:10:28.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:27 vm05.local ceph-mon[58955]: pgmap v1680: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1022 B/s rd, 818 B/s wr, 1 op/s 2026-03-10T14:10:28.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:27 vm05.local ceph-mon[51512]: pgmap v1680: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1022 B/s rd, 818 B/s wr, 1 op/s 2026-03-10T14:10:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:27 vm09.local ceph-mon[53367]: pgmap v1680: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1022 B/s rd, 818 B/s wr, 1 op/s 2026-03-10T14:10:30.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:29 vm05.local ceph-mon[58955]: pgmap v1681: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 1 op/s 2026-03-10T14:10:30.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:29 vm05.local ceph-mon[51512]: pgmap v1681: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 1 op/s 2026-03-10T14:10:30.081 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:10:29 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:10:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:10:30.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:10:29 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:10:30.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:29 vm09.local ceph-mon[53367]: pgmap v1681: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 1 op/s 2026-03-10T14:10:31.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:30 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:31.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:30 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:31.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:30 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-10T14:10:32.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:31 vm05.local ceph-mon[58955]: pgmap v1682: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 512 B/s wr, 1 op/s 2026-03-10T14:10:32.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:31 vm05.local ceph-mon[51512]: pgmap v1682: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 512 B/s wr, 1 op/s 2026-03-10T14:10:32.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:31 vm09.local ceph-mon[53367]: pgmap v1682: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 512 B/s wr, 1 op/s 2026-03-10T14:10:34.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:33 vm05.local ceph-mon[58955]: pgmap v1683: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 569 B/s rd, 0 op/s 2026-03-10T14:10:34.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:33 vm05.local ceph-mon[51512]: pgmap v1683: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 569 B/s rd, 0 op/s 2026-03-10T14:10:34.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:33 vm09.local ceph-mon[53367]: pgmap v1683: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 569 B/s rd, 0 op/s 2026-03-10T14:10:36.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:35 vm05.local ceph-mon[58955]: pgmap v1684: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:10:36.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:35 vm05.local ceph-mon[51512]: pgmap v1684: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:10:36.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:35 vm09.local ceph-mon[53367]: pgmap v1684: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:10:38.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:38 vm05.local ceph-mon[58955]: pgmap v1685: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:10:38.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:38 vm05.local ceph-mon[51512]: pgmap v1685: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:10:38.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:38 vm09.local ceph-mon[53367]: pgmap v1685: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:10:39.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:39 vm05.local ceph-mon[58955]: pgmap v1686: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:39.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:39 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:10:39.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:39 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:10:39.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:39 vm05.local ceph-mon[51512]: pgmap v1686: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T14:10:39.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:39 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:10:39.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:39 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:10:40.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:39 vm09.local ceph-mon[53367]: pgmap v1686: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:40.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:39 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:10:40.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:39 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:10:40.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:10:39 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:10:40.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:10:39 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:10:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:10:41.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:41 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:41.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:41 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:41.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:41 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:42.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:42 vm05.local ceph-mon[58955]: pgmap v1687: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:42.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:42 vm05.local ceph-mon[51512]: pgmap v1687: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:42.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:42 vm09.local ceph-mon[53367]: pgmap v1687: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:43.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:43 vm05.local ceph-mon[58955]: pgmap v1688: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:43.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:43 vm05.local ceph-mon[51512]: pgmap v1688: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:43.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:43 vm09.local ceph-mon[53367]: pgmap v1688: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:45.831 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:45 vm05.local ceph-mon[58955]: pgmap v1689: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:45.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:45 vm05.local ceph-mon[51512]: pgmap v1689: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:45.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:45 vm09.local ceph-mon[53367]: pgmap v1689: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:47.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:47 vm05.local ceph-mon[58955]: pgmap v1690: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:47.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:47 vm05.local ceph-mon[51512]: pgmap v1690: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:47.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:47 vm09.local ceph-mon[53367]: pgmap v1690: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:49.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:49 vm09.local ceph-mon[53367]: pgmap v1691: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:49.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:10:49 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:10:49.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:49 vm05.local ceph-mon[58955]: pgmap v1691: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:49.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:49 vm05.local ceph-mon[51512]: pgmap v1691: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:50.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:10:49 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:10:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:10:50.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:50 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:51.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:50 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:51.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:50 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:10:51.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:51 vm09.local ceph-mon[53367]: pgmap v1692: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:52.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:51 vm05.local ceph-mon[58955]: pgmap v1692: 176 pgs: 176 active+clean; 473 KiB data, 
1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:52.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:51 vm05.local ceph-mon[51512]: pgmap v1692: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:53.475 INFO:tasks.workunit.client.0.vm05.stderr:+ pid=166350 2026-03-10T14:10:53.475 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e max_bytes 0 2026-03-10T14:10:53.475 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e put two /etc/passwd 2026-03-10T14:10:53.528 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.528+0000 7f9757de3640 1 -- 192.168.123.105:0/3738813400 >> v1:192.168.123.105:6789/0 conn(0x7f975010d7b0 legacy=0x7f975010fba0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:53.528 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.528+0000 7f9757de3640 1 -- 192.168.123.105:0/3738813400 shutdown_connections 2026-03-10T14:10:53.528 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.528+0000 7f9757de3640 1 -- 192.168.123.105:0/3738813400 >> 192.168.123.105:0/3738813400 conn(0x7f97501005c0 msgr2=0x7f97501029e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:10:53.528 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.528+0000 7f9757de3640 1 -- 192.168.123.105:0/3738813400 shutdown_connections 2026-03-10T14:10:53.528 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.528+0000 7f9757de3640 1 -- 192.168.123.105:0/3738813400 wait complete. 2026-03-10T14:10:53.529 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.528+0000 7f9757de3640 1 Processor -- start 2026-03-10T14:10:53.529 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.528+0000 7f9757de3640 1 -- start start 2026-03-10T14:10:53.529 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.528+0000 7f9757de3640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f97501ab6c0 con 0x7f9750111380 2026-03-10T14:10:53.529 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.528+0000 7f9757de3640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f97501ac8c0 con 0x7f975010a910 2026-03-10T14:10:53.529 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.528+0000 7f9757de3640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f97501adac0 con 0x7f975010d7b0 2026-03-10T14:10:53.529 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.529+0000 7f9755b58640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f975010a910 0x7f9750109b00 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:52190/0 (socket says 192.168.123.105:52190) 2026-03-10T14:10:53.529 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.529+0000 7f9755b58640 1 -- 192.168.123.105:0/1432036116 learned_addr learned my addr 192.168.123.105:0/1432036116 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T14:10:53.529 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.529+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2058760496 0 0) 0x7f97501adac0 con 0x7f975010d7b0 2026-03-10T14:10:53.529 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.529+0000 7f973effd640 1 -- 
192.168.123.105:0/1432036116 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f9724003620 con 0x7f975010d7b0 2026-03-10T14:10:53.529 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.529+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2980186658 0 0) 0x7f9724003620 con 0x7f975010d7b0 2026-03-10T14:10:53.529 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.529+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f97501adac0 con 0x7f975010d7b0 2026-03-10T14:10:53.529 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.529+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f9744002c20 con 0x7f975010d7b0 2026-03-10T14:10:53.531 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.529+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3518769783 0 0) 0x7f97501adac0 con 0x7f975010d7b0 2026-03-10T14:10:53.531 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.529+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 >> v1:192.168.123.109:6789/0 conn(0x7f975010a910 legacy=0x7f9750109b00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:53.531 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.529+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 >> v1:192.168.123.105:6789/0 conn(0x7f9750111380 legacy=0x7f97501a9dc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:53.531 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.529+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f97501aecc0 con 0x7f975010d7b0 2026-03-10T14:10:53.531 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.529+0000 7f9757de3640 1 -- 192.168.123.105:0/1432036116 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f97501acaf0 con 0x7f975010d7b0 2026-03-10T14:10:53.531 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.529+0000 7f9757de3640 1 -- 192.168.123.105:0/1432036116 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f97501ad0b0 con 0x7f975010d7b0 2026-03-10T14:10:53.531 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.529+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f9744003d20 con 0x7f975010d7b0 2026-03-10T14:10:53.531 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.529+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f9744004fb0 con 0x7f975010d7b0 2026-03-10T14:10:53.531 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.530+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 3169204467 0 0) 0x7f974401d840 con 0x7f975010d7b0 2026-03-10T14:10:53.531 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.531+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(766..766 src has 254..766) ==== 7778+0+0 (unknown 
1232544342 0 0) 0x7f9744094630 con 0x7f975010d7b0 2026-03-10T14:10:53.531 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.531+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=767}) -- 0x7f97501adac0 con 0x7f975010d7b0 2026-03-10T14:10:53.532 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.531+0000 7f9757de3640 1 -- 192.168.123.105:0/1432036116 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9718005180 con 0x7f975010d7b0 2026-03-10T14:10:53.535 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.534+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f9744061a90 con 0x7f975010d7b0 2026-03-10T14:10:53.630 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:53.630+0000 7f9757de3640 1 -- 192.168.123.105:0/1432036116 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"} v 0) -- 0x7f9718005470 con 0x7f975010d7b0 2026-03-10T14:10:53.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:53 vm09.local ceph-mon[53367]: pgmap v1693: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:54.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:53 vm05.local ceph-mon[58955]: pgmap v1693: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:54.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:53 vm05.local ceph-mon[51512]: pgmap v1693: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:10:54.605 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:54.604+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 <== mon.2 v1:192.168.123.105:6790/0 10 ==== osd_map(767..767 src has 254..767) ==== 628+0+0 (unknown 2459883423 0 0) 0x7f9744003420 con 0x7f975010d7b0 2026-03-10T14:10:54.605 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:54.604+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=768}) -- 0x7f9724003620 con 0x7f975010d7b0 2026-03-10T14:10:54.610 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:54.609+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 <== mon.2 v1:192.168.123.105:6790/0 11 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e v767) ==== 217+0+0 (unknown 1739253850 0 0) 0x7f97440669d0 con 0x7f975010d7b0 2026-03-10T14:10:54.674 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:54.672+0000 7f9757de3640 1 -- 192.168.123.105:0/1432036116 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"} v 0) -- 0x7f9718005d40 con 0x7f975010d7b0 2026-03-10T14:10:54.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:54 vm09.local ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1432036116' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:10:54.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:54 vm09.local ceph-mon[53367]: from='client.50180 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:10:54.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:54 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:10:55.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:54 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1432036116' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:10:55.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:54 vm05.local ceph-mon[58955]: from='client.50180 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:10:55.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:54 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:10:55.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:54 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1432036116' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:10:55.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:54 vm05.local ceph-mon[51512]: from='client.50180 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:10:55.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:54 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:10:55.637 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.636+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 <== mon.2 v1:192.168.123.105:6790/0 12 ==== osd_map(768..768 src has 254..768) ==== 628+0+0 (unknown 4251800784 0 0) 0x7f97440599f0 con 0x7f975010d7b0 2026-03-10T14:10:55.637 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.636+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=769}) -- 0x7f97240855f0 con 0x7f975010d7b0 2026-03-10T14:10:55.637 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.636+0000 7f973effd640 1 -- 192.168.123.105:0/1432036116 <== mon.2 v1:192.168.123.105:6790/0 13 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e v768) ==== 217+0+0 (unknown 3578061628 0 0) 0x7f9744092430 con 0x7f975010d7b0 2026-03-10T14:10:55.638 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_bytes = 0 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e 2026-03-10T14:10:55.639 
INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.638+0000 7f9757de3640 1 -- 192.168.123.105:0/1432036116 >> v1:192.168.123.105:6800/1010796596 conn(0x7f9724078270 legacy=0x7f972407a730 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:55.639 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.639+0000 7f9757de3640 1 -- 192.168.123.105:0/1432036116 >> v1:192.168.123.105:6790/0 conn(0x7f975010d7b0 legacy=0x7f97501a6650 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:55.639 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.639+0000 7f9757de3640 1 -- 192.168.123.105:0/1432036116 shutdown_connections 2026-03-10T14:10:55.639 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.639+0000 7f9757de3640 1 -- 192.168.123.105:0/1432036116 >> 192.168.123.105:0/1432036116 conn(0x7f97501005c0 msgr2=0x7f9750111810 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:10:55.639 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.639+0000 7f9757de3640 1 -- 192.168.123.105:0/1432036116 shutdown_connections 2026-03-10T14:10:55.639 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.639+0000 7f9757de3640 1 -- 192.168.123.105:0/1432036116 wait complete. 2026-03-10T14:10:55.649 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e max_objects 0 2026-03-10T14:10:55.702 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.701+0000 7f8f935f7640 1 -- 192.168.123.105:0/3928663550 >> v1:192.168.123.105:6789/0 conn(0x7f8f8c101f20 legacy=0x7f8f8c10fc30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:55.702 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.701+0000 7f8f935f7640 1 -- 192.168.123.105:0/3928663550 shutdown_connections 2026-03-10T14:10:55.702 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.701+0000 7f8f935f7640 1 -- 192.168.123.105:0/3928663550 >> 192.168.123.105:0/3928663550 conn(0x7f8f8c0fd220 msgr2=0x7f8f8c0ff640 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:10:55.702 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.701+0000 7f8f935f7640 1 -- 192.168.123.105:0/3928663550 shutdown_connections 2026-03-10T14:10:55.702 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.701+0000 7f8f935f7640 1 -- 192.168.123.105:0/3928663550 wait complete. 
2026-03-10T14:10:55.702 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.701+0000 7f8f935f7640 1 Processor -- start 2026-03-10T14:10:55.702 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.701+0000 7f8f935f7640 1 -- start start 2026-03-10T14:10:55.702 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.702+0000 7f8f935f7640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8f8c1ab8c0 con 0x7f8f8c101f20 2026-03-10T14:10:55.702 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.702+0000 7f8f935f7640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8f8c1acac0 con 0x7f8f8c101430 2026-03-10T14:10:55.702 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.702+0000 7f8f935f7640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f8f8c1adcc0 con 0x7f8f8c1113e0 2026-03-10T14:10:55.702 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.702+0000 7f8f91b6d640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f8f8c1113e0 0x7f8f8c1a9fc0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:35970/0 (socket says 192.168.123.105:35970) 2026-03-10T14:10:55.703 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.702+0000 7f8f91b6d640 1 -- 192.168.123.105:0/487085938 learned_addr learned my addr 192.168.123.105:0/487085938 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T14:10:55.703 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.702+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 6998451 0 0) 0x7f8f8c1ab8c0 con 0x7f8f8c101f20 2026-03-10T14:10:55.703 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.702+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8f5c003620 con 0x7f8f8c101f20 2026-03-10T14:10:55.703 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.702+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 778422864 0 0) 0x7f8f8c1acac0 con 0x7f8f8c101430 2026-03-10T14:10:55.703 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.702+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f8f8c1ab8c0 con 0x7f8f8c101430 2026-03-10T14:10:55.703 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.702+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 385399279 0 0) 0x7f8f5c003620 con 0x7f8f8c101f20 2026-03-10T14:10:55.703 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.702+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f8f8c1acac0 con 0x7f8f8c101f20 2026-03-10T14:10:55.703 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.702+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f8f80002d60 con 0x7f8f8c101f20 2026-03-10T14:10:55.703 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.702+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 <== mon.0 v1:192.168.123.105:6789/0 4 ==== 
auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 1333153633 0 0) 0x7f8f8c1acac0 con 0x7f8f8c101f20 2026-03-10T14:10:55.703 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.702+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 >> v1:192.168.123.105:6790/0 conn(0x7f8f8c1113e0 legacy=0x7f8f8c1a9fc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:55.703 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.703+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 >> v1:192.168.123.109:6789/0 conn(0x7f8f8c101430 legacy=0x7f8f8c110c30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:55.703 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.703+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8f8c1aeec0 con 0x7f8f8c101f20 2026-03-10T14:10:55.703 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.703+0000 7f8f935f7640 1 -- 192.168.123.105:0/487085938 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f8f8c1accf0 con 0x7f8f8c101f20 2026-03-10T14:10:55.703 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.703+0000 7f8f935f7640 1 -- 192.168.123.105:0/487085938 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f8f8c1ad2b0 con 0x7f8f8c101f20 2026-03-10T14:10:55.705 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.704+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f8f80003270 con 0x7f8f8c101f20 2026-03-10T14:10:55.705 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.704+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f8f80004e40 con 0x7f8f8c101f20 2026-03-10T14:10:55.707 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.705+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 3169204467 0 0) 0x7f8f8001d7b0 con 0x7f8f8c101f20 2026-03-10T14:10:55.707 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.705+0000 7f8f935f7640 1 -- 192.168.123.105:0/487085938 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8f58005180 con 0x7f8f8c101f20 2026-03-10T14:10:55.708 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.707+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(768..768 src has 254..768) ==== 7778+0+0 (unknown 2823908750 0 0) 0x7f8f80095330 con 0x7f8f8c101f20 2026-03-10T14:10:55.708 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.707+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=769}) -- 0x7f8f8c1acac0 con 0x7f8f8c101f20 2026-03-10T14:10:55.710 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.709+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f8f80061ae0 con 0x7f8f8c101f20 2026-03-10T14:10:55.811 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:55.810+0000 7f8f935f7640 1 -- 192.168.123.105:0/487085938 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": 
"29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"} v 0) -- 0x7f8f58005470 con 0x7f8f8c101f20 2026-03-10T14:10:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:55 vm09.local ceph-mon[53367]: pgmap v1694: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:55 vm09.local ceph-mon[53367]: from='client.50180 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T14:10:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:55 vm09.local ceph-mon[53367]: osdmap e767: 8 total, 8 up, 8 in 2026-03-10T14:10:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:55 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:10:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:55 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1432036116' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:10:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:55 vm09.local ceph-mon[53367]: from='client.50180 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:10:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:55 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:10:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:55 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:10:55.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:55 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[51512]: pgmap v1694: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[51512]: from='client.50180 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[51512]: osdmap e767: 8 total, 8 up, 8 in 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/1432036116' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[51512]: from='client.50180 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[58955]: pgmap v1694: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[58955]: from='client.50180 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[58955]: osdmap e767: 8 total, 8 up, 8 in 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1432036116' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[58955]: from='client.50180 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:10:56.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:55 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:10:56.465 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:56.464+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e v769) ==== 221+0+0 (unknown 3298682756 0 0) 0x7f8f800668f0 con 0x7f8f8c101f20 2026-03-10T14:10:56.471 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:56.470+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 <== mon.0 v1:192.168.123.105:6789/0 11 ==== osd_map(769..769 src has 254..769) ==== 628+0+0 (unknown 889328394 0 0) 0x7f8f80059a40 con 0x7f8f8c101f20 2026-03-10T14:10:56.471 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:56.470+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=770}) -- 0x7f8f5c003620 con 0x7f8f8c101f20 2026-03-10T14:10:56.525 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:56.525+0000 7f8f935f7640 1 -- 192.168.123.105:0/487085938 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"} v 0) -- 0x7f8f580028a0 con 0x7f8f8c101f20 2026-03-10T14:10:56.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:56 vm09.local ceph-mon[53367]: from='client.50180 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T14:10:56.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:56 vm09.local ceph-mon[53367]: osdmap e768: 8 total, 8 up, 8 in 2026-03-10T14:10:56.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:56 vm09.local ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/487085938' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T14:10:56.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:56 vm09.local ceph-mon[53367]: pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' no longer out of quota; removing NO_QUOTA flag 2026-03-10T14:10:56.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:56 vm09.local ceph-mon[53367]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T14:10:56.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:56 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/487085938' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]': finished 2026-03-10T14:10:56.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:56 vm09.local ceph-mon[53367]: osdmap e769: 8 total, 8 up, 8 in 2026-03-10T14:10:56.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:56 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/487085938' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T14:10:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[58955]: from='client.50180 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T14:10:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[58955]: osdmap e768: 8 total, 8 up, 8 in 2026-03-10T14:10:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/487085938' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T14:10:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[58955]: pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' no longer out of quota; removing NO_QUOTA flag 2026-03-10T14:10:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[58955]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T14:10:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/487085938' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]': finished 2026-03-10T14:10:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[58955]: osdmap e769: 8 total, 8 up, 8 in 2026-03-10T14:10:57.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/487085938' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T14:10:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[51512]: from='client.50180 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T14:10:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[51512]: osdmap e768: 8 total, 8 up, 8 in 2026-03-10T14:10:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/487085938' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T14:10:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[51512]: pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' no longer out of quota; removing NO_QUOTA flag 2026-03-10T14:10:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[51512]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T14:10:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/487085938' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]': finished 2026-03-10T14:10:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[51512]: osdmap e769: 8 total, 8 up, 8 in 2026-03-10T14:10:57.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:56 vm05.local ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/487085938' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T14:10:57.477 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.476+0000 7f8f7a7fc640 1 -- 192.168.123.105:0/487085938 <== mon.0 v1:192.168.123.105:6789/0 12 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e v770) ==== 221+0+0 (unknown 1098965846 0 0) 0x7f8f800930f0 con 0x7f8f8c101f20 2026-03-10T14:10:57.477 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_objects = 0 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e 2026-03-10T14:10:57.479 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.479+0000 7f8f935f7640 1 -- 192.168.123.105:0/487085938 >> v1:192.168.123.105:6800/1010796596 conn(0x7f8f5c0783e0 legacy=0x7f8f5c07a8a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:57.479 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.479+0000 7f8f935f7640 1 -- 192.168.123.105:0/487085938 >> v1:192.168.123.105:6789/0 conn(0x7f8f8c101f20 legacy=0x7f8f8c1a6690 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:57.480 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.480+0000 7f8f935f7640 1 -- 192.168.123.105:0/487085938 shutdown_connections 2026-03-10T14:10:57.480 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.480+0000 7f8f935f7640 1 -- 192.168.123.105:0/487085938 >> 192.168.123.105:0/487085938 conn(0x7f8f8c0fd220 msgr2=0x7f8f8c0ff610 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:10:57.480 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.480+0000 7f8f935f7640 1 -- 192.168.123.105:0/487085938 shutdown_connections 2026-03-10T14:10:57.480 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.480+0000 7f8f935f7640 1 -- 192.168.123.105:0/487085938 wait complete. 
2026-03-10T14:10:57.490 INFO:tasks.workunit.client.0.vm05.stderr:+ wait 166350 2026-03-10T14:10:57.490 INFO:tasks.workunit.client.0.vm05.stderr:+ '[' 0 -ne 0 ']' 2026-03-10T14:10:57.490 INFO:tasks.workunit.client.0.vm05.stderr:+ true 2026-03-10T14:10:57.490 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e put three /etc/passwd 2026-03-10T14:10:57.519 INFO:tasks.workunit.client.0.vm05.stderr:++ uuidgen 2026-03-10T14:10:57.519 INFO:tasks.workunit.client.0.vm05.stderr:+ pp=ffbc96f0-d53c-4a94-9954-47f277c886bf 2026-03-10T14:10:57.519 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool create ffbc96f0-d53c-4a94-9954-47f277c886bf 12 2026-03-10T14:10:57.572 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.571+0000 7f2aba42b640 1 -- 192.168.123.105:0/149200937 >> v1:192.168.123.105:6790/0 conn(0x7f2ab4115990 legacy=0x7f2ab4117d80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:57.572 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.572+0000 7f2aba42b640 1 -- 192.168.123.105:0/149200937 shutdown_connections 2026-03-10T14:10:57.572 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.572+0000 7f2aba42b640 1 -- 192.168.123.105:0/149200937 >> 192.168.123.105:0/149200937 conn(0x7f2ab41005c0 msgr2=0x7f2ab41029e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:10:57.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.572+0000 7f2aba42b640 1 -- 192.168.123.105:0/149200937 shutdown_connections 2026-03-10T14:10:57.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.572+0000 7f2aba42b640 1 -- 192.168.123.105:0/149200937 wait complete. 2026-03-10T14:10:57.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.572+0000 7f2aba42b640 1 Processor -- start 2026-03-10T14:10:57.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.572+0000 7f2aba42b640 1 -- start start 2026-03-10T14:10:57.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.572+0000 7f2aba42b640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f2ab41afe00 con 0x7f2ab4115990 2026-03-10T14:10:57.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.572+0000 7f2aba42b640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f2ab41b1000 con 0x7f2ab4078110 2026-03-10T14:10:57.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.572+0000 7f2aba42b640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f2ab41b2200 con 0x7f2ab4077620 2026-03-10T14:10:57.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.572+0000 7f2ab37fe640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f2ab4078110 0x7f2ab41aab90 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:43586/0 (socket says 192.168.123.105:43586) 2026-03-10T14:10:57.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.572+0000 7f2ab37fe640 1 -- 192.168.123.105:0/586774052 learned_addr learned my addr 192.168.123.105:0/586774052 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T14:10:57.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.573+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1890499613 0 0) 0x7f2ab41b1000 con 0x7f2ab4078110 2026-03-10T14:10:57.573 
INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.573+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f2a88003620 con 0x7f2ab4078110 2026-03-10T14:10:57.573 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.573+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1615946888 0 0) 0x7f2ab41afe00 con 0x7f2ab4115990 2026-03-10T14:10:57.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.573+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f2ab41b1000 con 0x7f2ab4115990 2026-03-10T14:10:57.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.573+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3533942314 0 0) 0x7f2ab41b2200 con 0x7f2ab4077620 2026-03-10T14:10:57.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.573+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f2ab41afe00 con 0x7f2ab4077620 2026-03-10T14:10:57.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.573+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 221872169 0 0) 0x7f2a88003620 con 0x7f2ab4078110 2026-03-10T14:10:57.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.573+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f2ab41b2200 con 0x7f2ab4078110 2026-03-10T14:10:57.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.573+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3801581356 0 0) 0x7f2ab41b1000 con 0x7f2ab4115990 2026-03-10T14:10:57.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.573+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f2a88003620 con 0x7f2ab4115990 2026-03-10T14:10:57.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.574+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1393087810 0 0) 0x7f2ab41afe00 con 0x7f2ab4077620 2026-03-10T14:10:57.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.574+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f2ab41b1000 con 0x7f2ab4077620 2026-03-10T14:10:57.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.574+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f2aa00031f0 con 0x7f2ab4078110 2026-03-10T14:10:57.574 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.574+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f2aa8003920 con 0x7f2ab4115990 2026-03-10T14:10:57.575 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.574+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== 
mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f2aa4002f80 con 0x7f2ab4077620
2026-03-10T14:10:57.575 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.574+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 3312600523 0 0) 0x7f2ab41b2200 con 0x7f2ab4078110
2026-03-10T14:10:57.575 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.574+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 >> v1:192.168.123.105:6790/0 conn(0x7f2ab4077620 legacy=0x7f2ab4115210 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T14:10:57.575 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.574+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 >> v1:192.168.123.105:6789/0 conn(0x7f2ab4115990 legacy=0x7f2ab41ae500 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T14:10:57.575 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.574+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2ab41b3400 con 0x7f2ab4078110
2026-03-10T14:10:57.575 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.574+0000 7f2aba42b640 1 -- 192.168.123.105:0/586774052 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f2ab41b0030 con 0x7f2ab4078110
2026-03-10T14:10:57.575 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.574+0000 7f2aba42b640 1 -- 192.168.123.105:0/586774052 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7f2ab41b0550 con 0x7f2ab4078110
2026-03-10T14:10:57.576 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.575+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f2aa0003b90 con 0x7f2ab4078110
2026-03-10T14:10:57.576 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.575+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.1 v1:192.168.123.109:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f2aa0005c00 con 0x7f2ab4078110
2026-03-10T14:10:57.576 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.575+0000 7f2aba42b640 1 -- 192.168.123.105:0/586774052 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2ab4078d60 con 0x7f2ab4078110
2026-03-10T14:10:57.577 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.576+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 3169204467 0 0) 0x7f2aa0003740 con 0x7f2ab4078110
2026-03-10T14:10:57.577 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.577+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(770..770 src has 254..770) ==== 7778+0+0 (unknown 3303077834 0 0) 0x7f2aa0095680 con 0x7f2ab4078110
2026-03-10T14:10:57.579 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.579+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f2aa0061cc0 con 0x7f2ab4078110
2026-03-10T14:10:57.687 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:57.686+0000 7f2aba42b640 1 -- 192.168.123.105:0/586774052 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12} v 0) -- 0x7f2ab411a430 con 0x7f2ab4078110
2026-03-10T14:10:57.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:57 vm09.local ceph-mon[53367]: pgmap v1697: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T14:10:57.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:57 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/487085938' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]': finished
2026-03-10T14:10:57.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:57 vm09.local ceph-mon[53367]: osdmap e770: 8 total, 8 up, 8 in
2026-03-10T14:10:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:57 vm05.local ceph-mon[58955]: pgmap v1697: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T14:10:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:57 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/487085938' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]': finished
2026-03-10T14:10:58.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:57 vm05.local ceph-mon[58955]: osdmap e770: 8 total, 8 up, 8 in
2026-03-10T14:10:58.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:57 vm05.local ceph-mon[51512]: pgmap v1697: 176 pgs: 176 active+clean; 473 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T14:10:58.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:57 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/487085938' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]': finished
2026-03-10T14:10:58.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:57 vm05.local ceph-mon[51512]: osdmap e770: 8 total, 8 up, 8 in
2026-03-10T14:10:58.644 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.643+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.1 v1:192.168.123.109:6789/0 10 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]=0 pool 'ffbc96f0-d53c-4a94-9954-47f277c886bf' created v771) ==== 176+0+0 (unknown 2298558016 0 0) 0x7f2aa0066c00 con 0x7f2ab4078110
2026-03-10T14:10:58.704 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.704+0000 7f2aba42b640 1 -- 192.168.123.105:0/586774052 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12} v 0) -- 0x7f2ab41b0840 con 0x7f2ab4078110
2026-03-10T14:10:58.705 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.705+0000 7f2ab17fa640 1 -- 192.168.123.105:0/586774052 <== mon.1 v1:192.168.123.109:6789/0 11 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]=0 pool 'ffbc96f0-d53c-4a94-9954-47f277c886bf' already exists v771) ==== 183+0+0 (unknown 371106642 0 0) 0x7f2aa0059c20 con 0x7f2ab4078110
2026-03-10T14:10:58.705 INFO:tasks.workunit.client.0.vm05.stderr:pool 'ffbc96f0-d53c-4a94-9954-47f277c886bf' already exists
2026-03-10T14:10:58.707 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.707+0000 7f2aba42b640 1 -- 192.168.123.105:0/586774052 >> v1:192.168.123.105:6800/1010796596 conn(0x7f2a88078980 legacy=0x7f2a8807ae40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T14:10:58.708 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.707+0000 7f2aba42b640 1 -- 192.168.123.105:0/586774052 >> v1:192.168.123.109:6789/0 conn(0x7f2ab4078110 legacy=0x7f2ab41aab90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T14:10:58.708 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.707+0000 7f2aba42b640 1 -- 192.168.123.105:0/586774052 shutdown_connections
2026-03-10T14:10:58.708 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.707+0000 7f2aba42b640 1 -- 192.168.123.105:0/586774052 >> 192.168.123.105:0/586774052 conn(0x7f2ab41005c0 msgr2=0x7f2ab4079a70 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T14:10:58.708 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.707+0000 7f2aba42b640 1 -- 192.168.123.105:0/586774052 shutdown_connections
2026-03-10T14:10:58.708 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.707+0000 7f2aba42b640 1 -- 192.168.123.105:0/586774052 wait complete.
2026-03-10T14:10:58.715 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool application enable ffbc96f0-d53c-4a94-9954-47f277c886bf rados 2026-03-10T14:10:58.768 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.767+0000 7f51965b6640 1 -- 192.168.123.105:0/3962614643 >> v1:192.168.123.105:6790/0 conn(0x7f5190115990 legacy=0x7f5190117d80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:58.768 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.767+0000 7f51965b6640 1 -- 192.168.123.105:0/3962614643 shutdown_connections 2026-03-10T14:10:58.768 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.767+0000 7f51965b6640 1 -- 192.168.123.105:0/3962614643 >> 192.168.123.105:0/3962614643 conn(0x7f51901005c0 msgr2=0x7f51901029e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:10:58.768 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.767+0000 7f51965b6640 1 -- 192.168.123.105:0/3962614643 shutdown_connections 2026-03-10T14:10:58.768 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.767+0000 7f51965b6640 1 -- 192.168.123.105:0/3962614643 wait complete. 2026-03-10T14:10:58.768 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.768+0000 7f51965b6640 1 Processor -- start 2026-03-10T14:10:58.768 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.768+0000 7f51965b6640 1 -- start start 2026-03-10T14:10:58.768 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.768+0000 7f51965b6640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f51901aff80 con 0x7f5190077620 2026-03-10T14:10:58.769 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.768+0000 7f51965b6640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f51901b1180 con 0x7f5190078110 2026-03-10T14:10:58.769 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.768+0000 7f51965b6640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f51901b2380 con 0x7f5190115990 2026-03-10T14:10:58.769 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.768+0000 7f5194b2c640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f5190115990 0x7f51901ae700 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:49282/0 (socket says 192.168.123.105:49282) 2026-03-10T14:10:58.769 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.768+0000 7f5194b2c640 1 -- 192.168.123.105:0/4228580875 learned_addr learned my addr 192.168.123.105:0/4228580875 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T14:10:58.769 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.769+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 392412776 0 0) 0x7f51901b2380 con 0x7f5190115990 2026-03-10T14:10:58.769 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.769+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5158003620 con 0x7f5190115990 2026-03-10T14:10:58.769 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.769+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3418934574 0 0) 0x7f51901aff80 con 0x7f5190077620 2026-03-10T14:10:58.769 
INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.769+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f51901b2380 con 0x7f5190077620 2026-03-10T14:10:58.770 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.769+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 41597431 0 0) 0x7f51901b1180 con 0x7f5190078110 2026-03-10T14:10:58.770 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.769+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f51901aff80 con 0x7f5190078110 2026-03-10T14:10:58.770 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.769+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2649182695 0 0) 0x7f5158003620 con 0x7f5190115990 2026-03-10T14:10:58.770 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.769+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f51901b1180 con 0x7f5190115990 2026-03-10T14:10:58.770 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.769+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2506883312 0 0) 0x7f51901b2380 con 0x7f5190077620 2026-03-10T14:10:58.770 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.769+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f5158003620 con 0x7f5190077620 2026-03-10T14:10:58.770 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.769+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f5184003600 con 0x7f5190115990 2026-03-10T14:10:58.770 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.769+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f5180002ef0 con 0x7f5190077620 2026-03-10T14:10:58.770 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.769+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1590877007 0 0) 0x7f51901aff80 con 0x7f5190078110 2026-03-10T14:10:58.770 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.769+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f51901b2380 con 0x7f5190078110 2026-03-10T14:10:58.770 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.769+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 471660883 0 0) 0x7f51901b1180 con 0x7f5190115990 2026-03-10T14:10:58.770 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.770+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 >> v1:192.168.123.109:6789/0 conn(0x7f5190078110 legacy=0x7f5190111fd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:58.770 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.770+0000 
7f518d7fa640 1 -- 192.168.123.105:0/4228580875 >> v1:192.168.123.105:6789/0 conn(0x7f5190077620 legacy=0x7f51901118c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:10:58.770 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.770+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f51901b3580 con 0x7f5190115990 2026-03-10T14:10:58.770 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.770+0000 7f51965b6640 1 -- 192.168.123.105:0/4228580875 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f51901b13b0 con 0x7f5190115990 2026-03-10T14:10:58.772 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.771+0000 7f51965b6640 1 -- 192.168.123.105:0/4228580875 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f51901b1920 con 0x7f5190115990 2026-03-10T14:10:58.772 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.771+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f51840037a0 con 0x7f5190115990 2026-03-10T14:10:58.772 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.771+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f5184005180 con 0x7f5190115990 2026-03-10T14:10:58.773 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.771+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 3169204467 0 0) 0x7f5184005400 con 0x7f5190115990 2026-03-10T14:10:58.773 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.772+0000 7f51965b6640 1 -- 192.168.123.105:0/4228580875 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5190078d60 con 0x7f5190115990 2026-03-10T14:10:58.773 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.772+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(771..771 src has 254..771) ==== 8153+0+0 (unknown 1069228152 0 0) 0x7f518405ac30 con 0x7f5190115990 2026-03-10T14:10:58.775 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.775+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f5184062cd0 con 0x7f5190115990 2026-03-10T14:10:58.878 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:58.877+0000 7f51965b6640 1 -- 192.168.123.105:0/4228580875 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"} v 0) -- 0x7f519011ce30 con 0x7f5190115990 2026-03-10T14:10:58.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:58 vm09.local ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/586774052' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]: dispatch 2026-03-10T14:10:58.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:58 vm09.local ceph-mon[53367]: from='client.50719 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]: dispatch 2026-03-10T14:10:59.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:58 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/586774052' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]: dispatch 2026-03-10T14:10:59.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:58 vm05.local ceph-mon[51512]: from='client.50719 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]: dispatch 2026-03-10T14:10:59.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:58 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/586774052' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]: dispatch 2026-03-10T14:10:59.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:58 vm05.local ceph-mon[58955]: from='client.50719 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]: dispatch 2026-03-10T14:10:59.660 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:59.659+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.2 v1:192.168.123.105:6790/0 10 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]=0 enabled application 'rados' on pool 'ffbc96f0-d53c-4a94-9954-47f277c886bf' v772) ==== 213+0+0 (unknown 1818813493 0 0) 0x7f5184067c10 con 0x7f5190115990 2026-03-10T14:10:59.722 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:10:59.721+0000 7f51965b6640 1 -- 192.168.123.105:0/4228580875 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"} v 0) -- 0x7f51901b1c60 con 0x7f5190115990 2026-03-10T14:10:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:59 vm09.local ceph-mon[53367]: pgmap v1700: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-10T14:10:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:59 vm09.local ceph-mon[53367]: from='client.50719 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]': finished 2026-03-10T14:10:59.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:59 vm09.local ceph-mon[53367]: osdmap e771: 8 total, 8 up, 8 in 2026-03-10T14:10:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:59 vm09.local ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/586774052' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]: dispatch 2026-03-10T14:10:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:59 vm09.local ceph-mon[53367]: from='client.50719 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]: dispatch 2026-03-10T14:10:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:59 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4228580875' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]: dispatch 2026-03-10T14:10:59.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:10:59 vm09.local ceph-mon[53367]: from='client.50207 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]: dispatch 2026-03-10T14:10:59.924 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:10:59 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:10:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:59 vm05.local ceph-mon[51512]: pgmap v1700: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-10T14:10:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:59 vm05.local ceph-mon[51512]: from='client.50719 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]': finished 2026-03-10T14:10:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:59 vm05.local ceph-mon[51512]: osdmap e771: 8 total, 8 up, 8 in 2026-03-10T14:10:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:59 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/586774052' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]: dispatch 2026-03-10T14:10:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:59 vm05.local ceph-mon[51512]: from='client.50719 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]: dispatch 2026-03-10T14:10:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:59 vm05.local ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/4228580875' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]: dispatch 2026-03-10T14:10:59.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:10:59 vm05.local ceph-mon[51512]: from='client.50207 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]: dispatch 2026-03-10T14:10:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:59 vm05.local ceph-mon[58955]: pgmap v1700: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-10T14:10:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:59 vm05.local ceph-mon[58955]: from='client.50719 ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]': finished 2026-03-10T14:10:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:59 vm05.local ceph-mon[58955]: osdmap e771: 8 total, 8 up, 8 in 2026-03-10T14:10:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:59 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/586774052' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]: dispatch 2026-03-10T14:10:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:59 vm05.local ceph-mon[58955]: from='client.50719 ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pg_num": 12}]: dispatch 2026-03-10T14:10:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:59 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4228580875' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]: dispatch 2026-03-10T14:10:59.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:10:59 vm05.local ceph-mon[58955]: from='client.50207 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]: dispatch 2026-03-10T14:11:00.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:10:59 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:10:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:11:00.657 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.656+0000 7f518d7fa640 1 -- 192.168.123.105:0/4228580875 <== mon.2 v1:192.168.123.105:6790/0 11 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]=0 enabled application 'rados' on pool 'ffbc96f0-d53c-4a94-9954-47f277c886bf' v773) ==== 213+0+0 (unknown 1447629523 0 0) 0x7f5184094450 con 0x7f5190115990 2026-03-10T14:11:00.657 INFO:tasks.workunit.client.0.vm05.stderr:enabled application 'rados' on pool 'ffbc96f0-d53c-4a94-9954-47f277c886bf' 2026-03-10T14:11:00.659 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.658+0000 7f51965b6640 1 -- 192.168.123.105:0/4228580875 >> v1:192.168.123.105:6800/1010796596 conn(0x7f51580787d0 legacy=0x7f515807ac90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:11:00.659 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.658+0000 7f51965b6640 1 -- 192.168.123.105:0/4228580875 >> v1:192.168.123.105:6790/0 conn(0x7f5190115990 legacy=0x7f51901ae700 unknown :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:11:00.659 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.658+0000 7f51965b6640 1 -- 192.168.123.105:0/4228580875 shutdown_connections 2026-03-10T14:11:00.659 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.658+0000 7f51965b6640 1 -- 192.168.123.105:0/4228580875 >> 192.168.123.105:0/4228580875 conn(0x7f51901005c0 msgr2=0x7f5190118d50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:11:00.659 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.658+0000 7f51965b6640 1 -- 192.168.123.105:0/4228580875 shutdown_connections 2026-03-10T14:11:00.659 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.658+0000 7f51965b6640 1 -- 192.168.123.105:0/4228580875 wait complete. 2026-03-10T14:11:00.667 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota ffbc96f0-d53c-4a94-9954-47f277c886bf max_objects 10 2026-03-10T14:11:00.720 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.720+0000 7ff107dcc640 1 -- 192.168.123.105:0/3575646500 >> v1:192.168.123.105:6789/0 conn(0x7ff10010f180 legacy=0x7ff100111620 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:11:00.721 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.720+0000 7ff107dcc640 1 -- 192.168.123.105:0/3575646500 shutdown_connections 2026-03-10T14:11:00.721 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.720+0000 7ff107dcc640 1 -- 192.168.123.105:0/3575646500 >> 192.168.123.105:0/3575646500 conn(0x7ff1000fc510 msgr2=0x7ff1000fe930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:11:00.721 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.720+0000 7ff107dcc640 1 -- 192.168.123.105:0/3575646500 shutdown_connections 2026-03-10T14:11:00.721 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.720+0000 7ff107dcc640 1 -- 192.168.123.105:0/3575646500 wait complete. 
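
Two more steps of the script are traced here: tagging the new pool for direct RADOS use, then capping it at ten objects. The equivalent commands, continuing the sketch above with the same assumed $pool:

    ceph osd pool application enable "$pool" rados    # acked: enabled application 'rados' on pool
    ceph osd pool set-quota "$pool" max_objects 10    # acked: set-quota max_objects = 10
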
2026-03-10T14:11:00.721 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.720+0000 7ff107dcc640 1 Processor -- start 2026-03-10T14:11:00.721 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.720+0000 7ff107dcc640 1 -- start start 2026-03-10T14:11:00.721 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff107dcc640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff1001ab920 con 0x7ff10010f180 2026-03-10T14:11:00.721 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff107dcc640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff1001acb20 con 0x7ff10010b5b0 2026-03-10T14:11:00.721 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff107dcc640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7ff1001add20 con 0x7ff100100930 2026-03-10T14:11:00.721 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff106342640 1 --1- >> v1:192.168.123.105:6789/0 conn(0x7ff10010f180 0x7ff1001aa070 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6789/0 says I am v1:192.168.123.105:50446/0 (socket says 192.168.123.105:50446) 2026-03-10T14:11:00.721 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff106342640 1 -- 192.168.123.105:0/1429411484 learned_addr learned my addr 192.168.123.105:0/1429411484 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T14:11:00.722 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3830525533 0 0) 0x7ff1001ab920 con 0x7ff10010f180 2026-03-10T14:11:00.722 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff0dc003620 con 0x7ff10010f180 2026-03-10T14:11:00.722 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3666414843 0 0) 0x7ff1001acb20 con 0x7ff10010b5b0 2026-03-10T14:11:00.722 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff1001ab920 con 0x7ff10010b5b0 2026-03-10T14:11:00.722 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 918344434 0 0) 0x7ff0dc003620 con 0x7ff10010f180 2026-03-10T14:11:00.722 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7ff1001acb20 con 0x7ff10010f180 2026-03-10T14:11:00.722 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7ff0fc003110 con 0x7ff10010f180 2026-03-10T14:11:00.722 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 <== mon.2 v1:192.168.123.105:6790/0 1 
==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2853787181 0 0) 0x7ff1001add20 con 0x7ff100100930 2026-03-10T14:11:00.722 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff0dc003620 con 0x7ff100100930 2026-03-10T14:11:00.722 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 <== mon.0 v1:192.168.123.105:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 417786186 0 0) 0x7ff1001acb20 con 0x7ff10010f180 2026-03-10T14:11:00.722 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 >> v1:192.168.123.105:6790/0 conn(0x7ff100100930 legacy=0x7ff1001079e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:11:00.722 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.721+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 >> v1:192.168.123.109:6789/0 conn(0x7ff10010b5b0 legacy=0x7ff1001a6940 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:11:00.722 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.722+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 --> v1:192.168.123.105:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff1001aef20 con 0x7ff10010f180 2026-03-10T14:11:00.723 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.722+0000 7ff107dcc640 1 -- 192.168.123.105:0/1429411484 --> v1:192.168.123.105:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7ff1001acd50 con 0x7ff10010f180 2026-03-10T14:11:00.723 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.722+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 <== mon.0 v1:192.168.123.105:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7ff0fc003d90 con 0x7ff10010f180 2026-03-10T14:11:00.723 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.722+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 <== mon.0 v1:192.168.123.105:6789/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7ff0fc005130 con 0x7ff10010f180 2026-03-10T14:11:00.723 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.722+0000 7ff107dcc640 1 -- 192.168.123.105:0/1429411484 --> v1:192.168.123.105:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7ff1001ad390 con 0x7ff10010f180 2026-03-10T14:11:00.723 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.723+0000 7ff107dcc640 1 -- 192.168.123.105:0/1429411484 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff0c8005180 con 0x7ff10010f180 2026-03-10T14:11:00.727 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.723+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 <== mon.0 v1:192.168.123.105:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 3169204467 0 0) 0x7ff0fc003940 con 0x7ff10010f180 2026-03-10T14:11:00.727 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.725+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 <== mon.0 v1:192.168.123.105:6789/0 8 ==== osd_map(773..773 src has 254..773) ==== 8166+0+0 (unknown 1585163590 0 0) 0x7ff0fc05a070 con 0x7ff10010f180 2026-03-10T14:11:00.727 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.727+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 <== mon.0 v1:192.168.123.105:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 
1092875540 0 2568732696) 0x7ff0fc062110 con 0x7ff10010f180 2026-03-10T14:11:00.825 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:00.825+0000 7ff107dcc640 1 -- 192.168.123.105:0/1429411484 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"} v 0) -- 0x7ff0c8005470 con 0x7ff10010f180 2026-03-10T14:11:01.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:00 vm05.local ceph-mon[58955]: from='client.50207 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]': finished 2026-03-10T14:11:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:00 vm05.local ceph-mon[58955]: osdmap e772: 8 total, 8 up, 8 in 2026-03-10T14:11:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:00 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/4228580875' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]: dispatch 2026-03-10T14:11:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:00 vm05.local ceph-mon[58955]: from='client.50207 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]: dispatch 2026-03-10T14:11:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:00 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:11:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:00 vm05.local ceph-mon[58955]: from='client.50207 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]': finished 2026-03-10T14:11:01.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:00 vm05.local ceph-mon[58955]: osdmap e773: 8 total, 8 up, 8 in 2026-03-10T14:11:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:00 vm05.local ceph-mon[51512]: from='client.50207 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]': finished 2026-03-10T14:11:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:00 vm05.local ceph-mon[51512]: osdmap e772: 8 total, 8 up, 8 in 2026-03-10T14:11:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:00 vm05.local ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/4228580875' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]: dispatch 2026-03-10T14:11:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:00 vm05.local ceph-mon[51512]: from='client.50207 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]: dispatch 2026-03-10T14:11:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:00 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:11:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:00 vm05.local ceph-mon[51512]: from='client.50207 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]': finished 2026-03-10T14:11:01.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:00 vm05.local ceph-mon[51512]: osdmap e773: 8 total, 8 up, 8 in 2026-03-10T14:11:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:00 vm09.local ceph-mon[53367]: from='client.50207 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]': finished 2026-03-10T14:11:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:00 vm09.local ceph-mon[53367]: osdmap e772: 8 total, 8 up, 8 in 2026-03-10T14:11:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:00 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/4228580875' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]: dispatch 2026-03-10T14:11:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:00 vm09.local ceph-mon[53367]: from='client.50207 ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]: dispatch 2026-03-10T14:11:01.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:00 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:11:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:00 vm09.local ceph-mon[53367]: from='client.50207 ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "app": "rados"}]': finished 2026-03-10T14:11:01.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:00 vm09.local ceph-mon[53367]: osdmap e773: 8 total, 8 up, 8 in 2026-03-10T14:11:01.683 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:01.682+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 <== mon.0 v1:192.168.123.105:6789/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool ffbc96f0-d53c-4a94-9954-47f277c886bf v774) ==== 223+0+0 (unknown 1921443095 0 0) 0x7ff0fc067050 con 0x7ff10010f180 2026-03-10T14:11:01.744 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:01.743+0000 7ff107dcc640 1 -- 192.168.123.105:0/1429411484 --> v1:192.168.123.105:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"} v 0) -- 
0x7ff0c80020e0 con 0x7ff10010f180 2026-03-10T14:11:02.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:01 vm05.local ceph-mon[51512]: pgmap v1703: 188 pgs: 12 unknown, 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-10T14:11:02.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:01 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1429411484' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:11:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:01 vm05.local ceph-mon[58955]: pgmap v1703: 188 pgs: 12 unknown, 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-10T14:11:02.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:01 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1429411484' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:11:02.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:01 vm09.local ceph-mon[53367]: pgmap v1703: 188 pgs: 12 unknown, 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 1 op/s 2026-03-10T14:11:02.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:01 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1429411484' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:11:02.690 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:02.689+0000 7ff0eeffd640 1 -- 192.168.123.105:0/1429411484 <== mon.0 v1:192.168.123.105:6789/0 11 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool ffbc96f0-d53c-4a94-9954-47f277c886bf v775) ==== 223+0+0 (unknown 1002651751 0 0) 0x7ff0fc093890 con 0x7ff10010f180 2026-03-10T14:11:02.690 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_objects = 10 for pool ffbc96f0-d53c-4a94-9954-47f277c886bf 2026-03-10T14:11:02.692 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:02.692+0000 7ff107dcc640 1 -- 192.168.123.105:0/1429411484 >> v1:192.168.123.105:6800/1010796596 conn(0x7ff0dc078140 legacy=0x7ff0dc07a600 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:11:02.692 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:02.692+0000 7ff107dcc640 1 -- 192.168.123.105:0/1429411484 >> v1:192.168.123.105:6789/0 conn(0x7ff10010f180 legacy=0x7ff1001aa070 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:11:02.693 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:02.692+0000 7ff107dcc640 1 -- 192.168.123.105:0/1429411484 shutdown_connections 2026-03-10T14:11:02.693 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:02.692+0000 7ff107dcc640 1 -- 192.168.123.105:0/1429411484 >> 192.168.123.105:0/1429411484 conn(0x7ff1000fc510 msgr2=0x7ff1000fc8f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:11:02.693 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:02.692+0000 7ff107dcc640 1 -- 192.168.123.105:0/1429411484 shutdown_connections 2026-03-10T14:11:02.693 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:11:02.692+0000 7ff107dcc640 1 
-- 192.168.123.105:0/1429411484 wait complete. 2026-03-10T14:11:02.702 INFO:tasks.workunit.client.0.vm05.stderr:+ sleep 30 2026-03-10T14:11:03.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:02 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1429411484' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"}]': finished 2026-03-10T14:11:03.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:02 vm05.local ceph-mon[51512]: osdmap e774: 8 total, 8 up, 8 in 2026-03-10T14:11:03.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:02 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1429411484' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:11:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:02 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1429411484' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"}]': finished 2026-03-10T14:11:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:02 vm05.local ceph-mon[58955]: osdmap e774: 8 total, 8 up, 8 in 2026-03-10T14:11:03.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:02 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1429411484' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:11:03.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:02 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1429411484' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"}]': finished 2026-03-10T14:11:03.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:02 vm09.local ceph-mon[53367]: osdmap e774: 8 total, 8 up, 8 in 2026-03-10T14:11:03.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:02 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1429411484' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T14:11:04.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:03 vm05.local ceph-mon[51512]: pgmap v1706: 188 pgs: 12 unknown, 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T14:11:04.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:03 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1429411484' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"}]': finished 2026-03-10T14:11:04.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:03 vm05.local ceph-mon[51512]: osdmap e775: 8 total, 8 up, 8 in 2026-03-10T14:11:04.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:03 vm05.local ceph-mon[58955]: pgmap v1706: 188 pgs: 12 unknown, 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T14:11:04.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:03 vm05.local ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/1429411484' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"}]': finished 2026-03-10T14:11:04.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:03 vm05.local ceph-mon[58955]: osdmap e775: 8 total, 8 up, 8 in 2026-03-10T14:11:04.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:03 vm09.local ceph-mon[53367]: pgmap v1706: 188 pgs: 12 unknown, 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T14:11:04.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:03 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1429411484' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "field": "max_objects", "val": "10"}]': finished 2026-03-10T14:11:04.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:03 vm09.local ceph-mon[53367]: osdmap e775: 8 total, 8 up, 8 in 2026-03-10T14:11:06.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:05 vm05.local ceph-mon[51512]: pgmap v1708: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T14:11:06.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:05 vm05.local ceph-mon[58955]: pgmap v1708: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T14:11:06.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:05 vm09.local ceph-mon[53367]: pgmap v1708: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T14:11:08.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:08 vm05.local ceph-mon[51512]: pgmap v1709: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:11:08.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:08 vm05.local ceph-mon[58955]: pgmap v1709: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:11:08.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:08 vm09.local ceph-mon[53367]: pgmap v1709: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:11:09.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:09 vm05.local ceph-mon[51512]: pgmap v1710: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T14:11:09.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:09 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:11:09.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:09 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:11:09.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:09 vm05.local ceph-mon[58955]: pgmap v1710: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T14:11:09.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:09 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:11:09.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:09 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:11:10.174 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:09 vm09.local ceph-mon[53367]: pgmap v1710: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T14:11:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:09 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:11:10.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:09 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:11:10.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:11:09 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:11:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:11:09 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:11:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:11:11.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:10 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:11:11.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:10 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:11:11.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:10 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:11:12.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:12 vm05.local ceph-mon[51512]: pgmap v1711: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:11:12.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:12 vm05.local ceph-mon[58955]: pgmap v1711: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:11:12.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:12 vm09.local ceph-mon[53367]: pgmap v1711: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T14:11:14.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:14 vm09.local ceph-mon[53367]: pgmap v1712: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:11:14.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:14 vm05.local ceph-mon[51512]: pgmap v1712: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:11:14.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:14 vm05.local ceph-mon[58955]: pgmap v1712: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:11:16.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:16 vm05.local ceph-mon[51512]: pgmap v1713: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T14:11:16.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:16 vm05.local ceph-mon[58955]: pgmap v1713: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 
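
The quota change commits twice above (osdmap e774 and e775, one per send) and the script then sleeps 30 seconds while the cluster idles: pgmap ticks, the mgr's periodic "osd blocklist ls", and the iscsi gateway's "service status" polls. When reproducing this by hand, the quota can be confirmed without waiting out the sleep; a hedged check:

    ceph osd pool get-quota "$pool"    # should report max objects: 10
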
2026-03-10T14:11:16.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:16 vm09.local ceph-mon[53367]: pgmap v1713: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-10T14:11:17.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:17 vm05.local ceph-mon[51512]: pgmap v1714: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:17.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:17 vm05.local ceph-mon[58955]: pgmap v1714: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:17.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:17 vm09.local ceph-mon[53367]: pgmap v1714: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:19.980 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:19 vm05.local ceph-mon[51512]: pgmap v1715: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:11:19.980 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:19 vm05.local ceph-mon[58955]: pgmap v1715: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:11:20.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:19 vm09.local ceph-mon[53367]: pgmap v1715: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:11:20.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:11:19 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:11:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:11:19 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:11:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:11:21.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:20 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:11:21.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:20 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:11:21.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:20 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:11:22.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:21 vm05.local ceph-mon[51512]: pgmap v1716: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:22.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:21 vm05.local ceph-mon[58955]: pgmap v1716: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:21 vm09.local ceph-mon[53367]: pgmap v1716: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:23 vm05.local ceph-mon[58955]: pgmap v1717: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:24.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:23 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:11:24.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:23 vm05.local ceph-mon[51512]: pgmap v1717: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:24.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:23 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:11:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:23 vm09.local ceph-mon[53367]: pgmap v1717: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:23 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:11:26.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:25 vm05.local ceph-mon[58955]: pgmap v1718: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:11:26.081 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:25 vm05.local ceph-mon[51512]: pgmap v1718: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:11:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:25 vm09.local ceph-mon[53367]: pgmap v1718: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:11:27.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:27 vm05.local ceph-mon[58955]: pgmap v1719: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:27.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:27 vm05.local ceph-mon[51512]: pgmap v1719: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:27.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:27 vm09.local ceph-mon[53367]: pgmap v1719: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:29.795 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:29 vm09.local ceph-mon[53367]: pgmap v1720: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:11:29.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:29 vm05.local ceph-mon[58955]: pgmap v1720: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:11:29.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:29 vm05.local ceph-mon[51512]: pgmap v1720: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:11:30.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:11:29 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:11:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:11:29 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:11:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:11:30.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:30 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:11:30.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:30 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:11:30.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:30 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:11:31.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:31 vm05.local ceph-mon[58955]: pgmap v1721: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:31.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:31 vm05.local ceph-mon[51512]: pgmap v1721: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:31.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:31 vm09.local ceph-mon[53367]: pgmap v1721: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:32.704 INFO:tasks.workunit.client.0.vm05.stderr:++ seq 1 10
2026-03-10T14:11:32.705 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10`
2026-03-10T14:11:32.705 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p ffbc96f0-d53c-4a94-9954-47f277c886bf put obj1 /etc/passwd
2026-03-10T14:11:32.752 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10`
2026-03-10T14:11:32.752 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p ffbc96f0-d53c-4a94-9954-47f277c886bf put obj2 /etc/passwd
2026-03-10T14:11:32.778 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10`
2026-03-10T14:11:32.779 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p ffbc96f0-d53c-4a94-9954-47f277c886bf put obj3 /etc/passwd
2026-03-10T14:11:32.810 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10`
2026-03-10T14:11:32.810 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p ffbc96f0-d53c-4a94-9954-47f277c886bf put obj4 /etc/passwd
2026-03-10T14:11:32.837 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10`
2026-03-10T14:11:32.837 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p ffbc96f0-d53c-4a94-9954-47f277c886bf put obj5 /etc/passwd
2026-03-10T14:11:32.865 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10`
2026-03-10T14:11:32.865 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p ffbc96f0-d53c-4a94-9954-47f277c886bf put obj6 /etc/passwd
2026-03-10T14:11:32.890 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10`
2026-03-10T14:11:32.891 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p ffbc96f0-d53c-4a94-9954-47f277c886bf put obj7 /etc/passwd
2026-03-10T14:11:32.920 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10`
2026-03-10T14:11:32.920 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p ffbc96f0-d53c-4a94-9954-47f277c886bf put obj8 /etc/passwd
2026-03-10T14:11:32.946 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10`
2026-03-10T14:11:32.946 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p ffbc96f0-d53c-4a94-9954-47f277c886bf put obj9 /etc/passwd
2026-03-10T14:11:32.974 INFO:tasks.workunit.client.0.vm05.stderr:+ for f in `seq 1 10`
2026-03-10T14:11:32.974 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p ffbc96f0-d53c-4a94-9954-47f277c886bf put obj10 /etc/passwd
2026-03-10T14:11:33.002 INFO:tasks.workunit.client.0.vm05.stderr:+ sleep 30
2026-03-10T14:11:33.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:33 vm05.local ceph-mon[58955]: pgmap v1722: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:33.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:33 vm05.local ceph-mon[51512]: pgmap v1722: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:33.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:33 vm09.local ceph-mon[53367]: pgmap v1722: 188 pgs: 188 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:11:35.832 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:35 vm05.local ceph-mon[58955]: pgmap v1723: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s
2026-03-10T14:11:35.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:35 vm05.local ceph-mon[51512]: pgmap v1723: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s
2026-03-10T14:11:35.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:35 vm09.local ceph-mon[53367]: pgmap v1723: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 2 op/s
2026-03-10T14:11:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:36 vm05.local ceph-mon[58955]: pool 'ffbc96f0-d53c-4a94-9954-47f277c886bf' is full (reached quota's max_objects: 10)
2026-03-10T14:11:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:36 vm05.local ceph-mon[58955]: Health check failed: 1 pool(s) full (POOL_FULL)
2026-03-10T14:11:36.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:36 vm05.local ceph-mon[58955]: osdmap e776: 8 total, 8 up, 8 in
2026-03-10T14:11:36.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:36 vm05.local ceph-mon[51512]: pool 'ffbc96f0-d53c-4a94-9954-47f277c886bf' is full (reached quota's max_objects: 10)
2026-03-10T14:11:36.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:36 vm05.local ceph-mon[51512]: Health check failed: 1 pool(s) full (POOL_FULL)
2026-03-10T14:11:36.832 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:36 vm05.local ceph-mon[51512]: osdmap e776: 8 total, 8 up, 8 in
2026-03-10T14:11:36.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:36 vm09.local ceph-mon[53367]: pool 'ffbc96f0-d53c-4a94-9954-47f277c886bf' is full (reached quota's max_objects: 10)
2026-03-10T14:11:36.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:36 vm09.local ceph-mon[53367]: Health check failed: 1 pool(s) full (POOL_FULL)
2026-03-10T14:11:36.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:36 vm09.local ceph-mon[53367]: osdmap e776: 8 total, 8 up, 8 in
2026-03-10T14:11:37.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:37 vm05.local ceph-mon[58955]: pgmap v1724: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1.7 KiB/s wr, 1 op/s
2026-03-10T14:11:37.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:37
vm05.local ceph-mon[51512]: pgmap v1724: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1.7 KiB/s wr, 1 op/s 2026-03-10T14:11:37.924 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:37 vm09.local ceph-mon[53367]: pgmap v1724: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1.7 KiB/s wr, 1 op/s 2026-03-10T14:11:40.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:39 vm09.local ceph-mon[53367]: pgmap v1726: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-10T14:11:40.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:39 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:11:40.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:39 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:11:40.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:11:39 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:11:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:39 vm05.local ceph-mon[58955]: pgmap v1726: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-10T14:11:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:39 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:11:40.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:39 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:11:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:39 vm05.local ceph-mon[51512]: pgmap v1726: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-10T14:11:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:39 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:11:40.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:39 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:11:40.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:11:39 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:11:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:11:41.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:40 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:11:41.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:40 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:11:41.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:40 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:11:42.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:41 vm09.local ceph-mon[53367]: pgmap v1727: 
188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-10T14:11:42.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:41 vm05.local ceph-mon[58955]: pgmap v1727: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-10T14:11:42.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:41 vm05.local ceph-mon[51512]: pgmap v1727: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-10T14:11:44.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:43 vm09.local ceph-mon[53367]: pgmap v1728: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-10T14:11:44.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:43 vm05.local ceph-mon[58955]: pgmap v1728: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-10T14:11:44.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:43 vm05.local ceph-mon[51512]: pgmap v1728: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s 2026-03-10T14:11:46.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:45 vm09.local ceph-mon[53367]: pgmap v1729: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:11:46.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:45 vm05.local ceph-mon[58955]: pgmap v1729: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:11:46.367 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:45 vm05.local ceph-mon[51512]: pgmap v1729: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:11:48.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:47 vm09.local ceph-mon[53367]: pgmap v1730: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:11:48.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:47 vm05.local ceph-mon[58955]: pgmap v1730: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:11:48.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:47 vm05.local ceph-mon[51512]: pgmap v1730: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:11:50.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:49 vm09.local ceph-mon[53367]: pgmap v1731: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T14:11:50.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:11:49 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:11:50.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:49 vm05.local ceph-mon[58955]: pgmap v1731: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T14:11:50.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:49 vm05.local ceph-mon[51512]: pgmap v1731: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T14:11:50.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:11:49 
vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:11:49] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:11:51.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:50 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:11:51.233 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:50 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:11:51.233 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:50 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:11:52.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:51 vm05.local ceph-mon[58955]: pgmap v1732: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:11:52.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:51 vm05.local ceph-mon[51512]: pgmap v1732: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:11:52.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:51 vm09.local ceph-mon[53367]: pgmap v1732: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:11:54.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:53 vm05.local ceph-mon[58955]: pgmap v1733: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:11:54.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:53 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:11:54.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:53 vm05.local ceph-mon[51512]: pgmap v1733: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:11:54.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:53 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:11:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:53 vm09.local ceph-mon[53367]: pgmap v1733: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:11:54.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:53 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:11:56.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:55 vm05.local ceph-mon[58955]: pgmap v1734: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:11:56.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:55 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:11:56.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:55 vm05.local ceph-mon[58955]: from='mgr.14712 
v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:11:56.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:55 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:11:56.332 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:55 vm05.local ceph-mon[58955]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:11:56.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:55 vm05.local ceph-mon[51512]: pgmap v1734: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:11:56.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:55 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:11:56.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:55 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:11:56.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:55 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:11:56.332 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:55 vm05.local ceph-mon[51512]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:11:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:55 vm09.local ceph-mon[53367]: pgmap v1734: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:11:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:55 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:11:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:55 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:11:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:55 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:11:56.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:55 vm09.local ceph-mon[53367]: from='mgr.14712 ' entity='mgr.y' 2026-03-10T14:11:58.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:11:57 vm05.local ceph-mon[58955]: pgmap v1735: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:11:58.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:11:57 vm05.local ceph-mon[51512]: pgmap v1735: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:11:58.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:11:57 vm09.local ceph-mon[53367]: pgmap v1735: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:12:00.080 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:11:59 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available 2026-03-10T14:12:00.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:00 vm05.local 
ceph-mon[58955]: pgmap v1736: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:12:00.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:00 vm05.local ceph-mon[51512]: pgmap v1736: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:12:00.332 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:11:59 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:11:59] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T14:12:00.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:00 vm09.local ceph-mon[53367]: pgmap v1736: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:12:01.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:01 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:12:01.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:01 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:12:01.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:01 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:12:02.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:02 vm09.local ceph-mon[53367]: pgmap v1737: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:12:02.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:02 vm05.local ceph-mon[58955]: pgmap v1737: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:12:02.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:02 vm05.local ceph-mon[51512]: pgmap v1737: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:12:03.003 INFO:tasks.workunit.client.0.vm05.stderr:+ rados -p 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e put threemore /etc/passwd 2026-03-10T14:12:03.033 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e max_bytes 0 2026-03-10T14:12:03.091 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.090+0000 7f52636db640 1 -- 192.168.123.105:0/4100681641 >> v1:192.168.123.105:6790/0 conn(0x7f525c10a6e0 legacy=0x7f525c10aac0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:12:03.091 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.090+0000 7f52636db640 1 -- 192.168.123.105:0/4100681641 shutdown_connections 2026-03-10T14:12:03.091 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.090+0000 7f52636db640 1 -- 192.168.123.105:0/4100681641 >> 192.168.123.105:0/4100681641 conn(0x7f525c06db70 msgr2=0x7f525c06df80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:12:03.091 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.090+0000 7f52636db640 1 -- 192.168.123.105:0/4100681641 shutdown_connections 2026-03-10T14:12:03.092 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.091+0000 7f52636db640 1 -- 192.168.123.105:0/4100681641 wait complete. 
2026-03-10T14:12:03.092 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.091+0000 7f52636db640 1 Processor -- start 2026-03-10T14:12:03.092 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.091+0000 7f52636db640 1 -- start start 2026-03-10T14:12:03.092 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.092+0000 7f52636db640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f525c1b8740 con 0x7f525c112630 2026-03-10T14:12:03.092 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.092+0000 7f52636db640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f525c1b9940 con 0x7f525c116140 2026-03-10T14:12:03.092 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.092+0000 7f52636db640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f525c1bab40 con 0x7f525c10a6e0 2026-03-10T14:12:03.093 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.092+0000 7f5261450640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f525c10a6e0 0x7f525c10e9f0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:52276/0 (socket says 192.168.123.105:52276) 2026-03-10T14:12:03.093 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.092+0000 7f5261450640 1 -- 192.168.123.105:0/3268942558 learned_addr learned my addr 192.168.123.105:0/3268942558 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T14:12:03.093 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.092+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 39634158 0 0) 0x7f525c1bab40 con 0x7f525c10a6e0 2026-03-10T14:12:03.093 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.092+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5234003620 con 0x7f525c10a6e0 2026-03-10T14:12:03.093 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.092+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1920154577 0 0) 0x7f5234003620 con 0x7f525c10a6e0 2026-03-10T14:12:03.093 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.093+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f525c1bab40 con 0x7f525c10a6e0 2026-03-10T14:12:03.093 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.093+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f524c003200 con 0x7f525c10a6e0 2026-03-10T14:12:03.093 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.093+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3701891609 0 0) 0x7f525c1b9940 con 0x7f525c116140 2026-03-10T14:12:03.093 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.093+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f5234003620 con 0x7f525c116140 2026-03-10T14:12:03.093 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.093+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 <== mon.2 v1:192.168.123.105:6790/0 4 ==== 
auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 910741411 0 0) 0x7f525c1bab40 con 0x7f525c10a6e0 2026-03-10T14:12:03.093 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.093+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 >> v1:192.168.123.109:6789/0 conn(0x7f525c116140 legacy=0x7f525c1b6e40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:12:03.093 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.093+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 >> v1:192.168.123.105:6789/0 conn(0x7f525c112630 legacy=0x7f525c10f100 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:12:03.093 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.093+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f525c1bbd40 con 0x7f525c10a6e0 2026-03-10T14:12:03.094 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.093+0000 7f52636db640 1 -- 192.168.123.105:0/3268942558 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f525c1b9b70 con 0x7f525c10a6e0 2026-03-10T14:12:03.094 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.093+0000 7f52636db640 1 -- 192.168.123.105:0/3268942558 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f525c1ba1b0 con 0x7f525c10a6e0 2026-03-10T14:12:03.095 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.094+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f524c003420 con 0x7f525c10a6e0 2026-03-10T14:12:03.095 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.094+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f524c004fe0 con 0x7f525c10a6e0 2026-03-10T14:12:03.095 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.095+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 3169204467 0 0) 0x7f524c005260 con 0x7f525c10a6e0 2026-03-10T14:12:03.096 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.095+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(776..776 src has 254..776) ==== 8166+0+0 (unknown 1348483041 0 0) 0x7f524c096930 con 0x7f525c10a6e0 2026-03-10T14:12:03.096 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.095+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=777}) -- 0x7f525c1bab40 con 0x7f525c10a6e0 2026-03-10T14:12:03.097 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.096+0000 7f52636db640 1 -- 192.168.123.105:0/3268942558 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5224005180 con 0x7f525c10a6e0 2026-03-10T14:12:03.102 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.102+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f524c062df0 con 0x7f525c10a6e0 2026-03-10T14:12:03.204 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:03.204+0000 7f52636db640 1 -- 192.168.123.105:0/3268942558 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool set-quota", "pool": 
"29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"} v 0) -- 0x7f5224005470 con 0x7f525c10a6e0 2026-03-10T14:12:04.108 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:04.107+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 <== mon.2 v1:192.168.123.105:6790/0 10 ==== osd_map(777..777 src has 254..777) ==== 628+0+0 (unknown 2524612509 0 0) 0x7f524c05ad50 con 0x7f525c10a6e0 2026-03-10T14:12:04.108 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:04.107+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=778}) -- 0x7f525c1b8740 con 0x7f525c10a6e0 2026-03-10T14:12:04.114 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:04.113+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 <== mon.2 v1:192.168.123.105:6790/0 11 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e v777) ==== 217+0+0 (unknown 2460923556 0 0) 0x7f524c067d30 con 0x7f525c10a6e0 2026-03-10T14:12:04.172 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:04.171+0000 7f52636db640 1 -- 192.168.123.105:0/3268942558 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"} v 0) -- 0x7f52240028a0 con 0x7f525c10a6e0 2026-03-10T14:12:04.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:04 vm09.local ceph-mon[53367]: pgmap v1738: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:12:04.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:04 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3268942558' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:12:04.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:04 vm09.local ceph-mon[53367]: from='client.50300 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:12:04.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:04 vm05.local ceph-mon[58955]: pgmap v1738: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:12:04.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:04 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3268942558' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:12:04.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:04 vm05.local ceph-mon[58955]: from='client.50300 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:12:04.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:04 vm05.local ceph-mon[51512]: pgmap v1738: 188 pgs: 188 active+clean; 490 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:12:04.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:04 vm05.local ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3268942558' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:12:04.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:04 vm05.local ceph-mon[51512]: from='client.50300 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:12:05.141 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.141+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 <== mon.2 v1:192.168.123.105:6790/0 12 ==== osd_map(778..778 src has 254..778) ==== 628+0+0 (unknown 2155307523 0 0) 0x7f524c094570 con 0x7f525c10a6e0 2026-03-10T14:12:05.141 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.141+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=779}) -- 0x7f5234003620 con 0x7f525c10a6e0 2026-03-10T14:12:05.152 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.151+0000 7f524a7fc640 1 -- 192.168.123.105:0/3268942558 <== mon.2 v1:192.168.123.105:6790/0 13 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e v778) ==== 217+0+0 (unknown 3836728839 0 0) 0x7f524c05a450 con 0x7f525c10a6e0 2026-03-10T14:12:05.152 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_bytes = 0 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e 2026-03-10T14:12:05.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.154+0000 7f52636db640 1 -- 192.168.123.105:0/3268942558 >> v1:192.168.123.105:6800/1010796596 conn(0x7f5234078310 legacy=0x7f523407a7d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:12:05.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.154+0000 7f52636db640 1 -- 192.168.123.105:0/3268942558 >> v1:192.168.123.105:6790/0 conn(0x7f525c10a6e0 legacy=0x7f525c10e9f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:12:05.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.154+0000 7f52636db640 1 -- 192.168.123.105:0/3268942558 shutdown_connections 2026-03-10T14:12:05.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.154+0000 7f52636db640 1 -- 192.168.123.105:0/3268942558 >> 192.168.123.105:0/3268942558 conn(0x7f525c06db70 msgr2=0x7f525c117c80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:12:05.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.154+0000 7f52636db640 1 -- 192.168.123.105:0/3268942558 shutdown_connections 2026-03-10T14:12:05.154 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.154+0000 7f52636db640 1 -- 192.168.123.105:0/3268942558 wait complete. 
2026-03-10T14:12:05.162 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool set-quota 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e max_objects 0 2026-03-10T14:12:05.218 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.217+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/2868111663 >> v1:192.168.123.109:6789/0 conn(0x7fbd0410dc80 legacy=0x7fbd0410e110 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:12:05.218 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.217+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/2868111663 shutdown_connections 2026-03-10T14:12:05.218 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.217+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/2868111663 >> 192.168.123.105:0/2868111663 conn(0x7fbd0406d6d0 msgr2=0x7fbd0406dae0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:12:05.218 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.218+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/2868111663 shutdown_connections 2026-03-10T14:12:05.218 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.218+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/2868111663 wait complete. 2026-03-10T14:12:05.219 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.218+0000 7fbd0c7cc640 1 Processor -- start 2026-03-10T14:12:05.219 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.218+0000 7fbd0c7cc640 1 -- start start 2026-03-10T14:12:05.219 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.218+0000 7fbd0c7cc640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fbd041b8830 con 0x7fbd04074220 2026-03-10T14:12:05.219 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.218+0000 7fbd0c7cc640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fbd041b9a30 con 0x7fbd0410dc80 2026-03-10T14:12:05.219 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.218+0000 7fbd0c7cc640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7fbd041bac30 con 0x7fbd0411e790 2026-03-10T14:12:05.219 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.219+0000 7fbd09d40640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7fbd0410dc80 0x7fbd041b35a0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:59064/0 (socket says 192.168.123.105:59064) 2026-03-10T14:12:05.219 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.219+0000 7fbd09d40640 1 -- 192.168.123.105:0/3061652126 learned_addr learned my addr 192.168.123.105:0/3061652126 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T14:12:05.219 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.219+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 672052907 0 0) 0x7fbd041b9a30 con 0x7fbd0410dc80 2026-03-10T14:12:05.219 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.219+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fbce0003620 con 0x7fbd0410dc80 2026-03-10T14:12:05.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.219+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1406752293 0 0) 0x7fbce0003620 con 0x7fbd0410dc80 2026-03-10T14:12:05.220 
INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.219+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7fbd041b9a30 con 0x7fbd0410dc80 2026-03-10T14:12:05.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.219+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fbcf4003580 con 0x7fbd0410dc80 2026-03-10T14:12:05.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.219+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 191704773 0 0) 0x7fbd041bac30 con 0x7fbd0411e790 2026-03-10T14:12:05.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.219+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fbce0003620 con 0x7fbd0411e790 2026-03-10T14:12:05.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.219+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3069577589 0 0) 0x7fbd041b8830 con 0x7fbd04074220 2026-03-10T14:12:05.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.219+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fbd041bac30 con 0x7fbd04074220 2026-03-10T14:12:05.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.220+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 <== mon.1 v1:192.168.123.109:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2111513136 0 0) 0x7fbd041b9a30 con 0x7fbd0410dc80 2026-03-10T14:12:05.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.220+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 >> v1:192.168.123.105:6790/0 conn(0x7fbd0411e790 legacy=0x7fbd041b6f30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:12:05.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.220+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 >> v1:192.168.123.105:6789/0 conn(0x7fbd04074220 legacy=0x7fbd0411e030 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:12:05.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.220+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 --> v1:192.168.123.109:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbd041bbe30 con 0x7fbd0410dc80 2026-03-10T14:12:05.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.220+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/3061652126 --> v1:192.168.123.109:6789/0 -- mon_subscribe({mgrmap=0+}) -- 0x7fbd041bae60 con 0x7fbd0410dc80 2026-03-10T14:12:05.220 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.220+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/3061652126 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=0}) -- 0x7fbd041bb450 con 0x7fbd0410dc80 2026-03-10T14:12:05.221 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.220+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 <== mon.1 v1:192.168.123.109:6789/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7fbcf4003810 con 0x7fbd0410dc80 2026-03-10T14:12:05.221 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.220+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 <== mon.1 v1:192.168.123.109:6789/0 
6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7fbcf4005040 con 0x7fbd0410dc80 2026-03-10T14:12:05.221 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.221+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 <== mon.1 v1:192.168.123.109:6789/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 3169204467 0 0) 0x7fbcf40052c0 con 0x7fbd0410dc80 2026-03-10T14:12:05.222 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.221+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/3061652126 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fbccc005180 con 0x7fbd0410dc80 2026-03-10T14:12:05.222 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.222+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 <== mon.1 v1:192.168.123.109:6789/0 8 ==== osd_map(778..778 src has 254..778) ==== 8166+0+0 (unknown 489619312 0 0) 0x7fbcf4095b10 con 0x7fbd0410dc80 2026-03-10T14:12:05.222 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.222+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=779}) -- 0x7fbd041b9a30 con 0x7fbd0410dc80 2026-03-10T14:12:05.225 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.224+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 <== mon.1 v1:192.168.123.109:6789/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7fbcf4062d90 con 0x7fbd0410dc80 2026-03-10T14:12:05.322 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:05.321+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/3061652126 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"} v 0) -- 0x7fbccc005470 con 0x7fbd0410dc80 2026-03-10T14:12:05.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:05 vm09.local ceph-mon[53367]: from='client.50300 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T14:12:05.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:05 vm09.local ceph-mon[53367]: osdmap e777: 8 total, 8 up, 8 in 2026-03-10T14:12:05.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:05 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3268942558' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:12:05.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:05 vm09.local ceph-mon[53367]: from='client.50300 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:12:05.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:05 vm05.local ceph-mon[51512]: from='client.50300 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T14:12:05.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:05 vm05.local ceph-mon[51512]: osdmap e777: 8 total, 8 up, 8 in 2026-03-10T14:12:05.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:05 vm05.local ceph-mon[51512]: from='client.? 
v1:192.168.123.105:0/3268942558' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:12:05.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:05 vm05.local ceph-mon[51512]: from='client.50300 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:12:05.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:05 vm05.local ceph-mon[58955]: from='client.50300 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T14:12:05.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:05 vm05.local ceph-mon[58955]: osdmap e777: 8 total, 8 up, 8 in 2026-03-10T14:12:05.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:05 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3268942558' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:12:05.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:05 vm05.local ceph-mon[58955]: from='client.50300 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]: dispatch 2026-03-10T14:12:06.148 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:06.147+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 <== mon.1 v1:192.168.123.109:6789/0 10 ==== osd_map(779..779 src has 254..779) ==== 628+0+0 (unknown 3931194734 0 0) 0x7fbcf405acf0 con 0x7fbd0410dc80 2026-03-10T14:12:06.148 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:06.147+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=780}) -- 0x7fbd041bac30 con 0x7fbd0410dc80 2026-03-10T14:12:06.151 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:06.151+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 <== mon.1 v1:192.168.123.109:6789/0 11 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e v779) ==== 221+0+0 (unknown 3571934843 0 0) 0x7fbcf4067cd0 con 0x7fbd0410dc80 2026-03-10T14:12:06.208 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:06.207+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/3061652126 --> v1:192.168.123.109:6789/0 -- mon_command({"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"} v 0) -- 0x7fbccc0028a0 con 0x7fbd0410dc80 2026-03-10T14:12:06.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:06 vm09.local ceph-mon[53367]: pgmap v1740: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T14:12:06.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:06 vm09.local ceph-mon[53367]: from='client.50300 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T14:12:06.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:06 vm09.local ceph-mon[53367]: osdmap e778: 8 total, 8 up, 8 in 2026-03-10T14:12:06.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 
14:12:06 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/3061652126' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T14:12:06.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:06 vm09.local ceph-mon[53367]: from='client.50806 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T14:12:06.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:06 vm05.local ceph-mon[51512]: pgmap v1740: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T14:12:06.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:06 vm05.local ceph-mon[51512]: from='client.50300 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T14:12:06.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:06 vm05.local ceph-mon[51512]: osdmap e778: 8 total, 8 up, 8 in 2026-03-10T14:12:06.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:06 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3061652126' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T14:12:06.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:06 vm05.local ceph-mon[51512]: from='client.50806 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T14:12:06.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:06 vm05.local ceph-mon[58955]: pgmap v1740: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 204 B/s wr, 1 op/s 2026-03-10T14:12:06.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:06 vm05.local ceph-mon[58955]: from='client.50300 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_bytes", "val": "0"}]': finished 2026-03-10T14:12:06.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:06 vm05.local ceph-mon[58955]: osdmap e778: 8 total, 8 up, 8 in 2026-03-10T14:12:06.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:06 vm05.local ceph-mon[58955]: from='client.? 
v1:192.168.123.105:0/3061652126' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T14:12:06.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:06 vm05.local ceph-mon[58955]: from='client.50806 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch 2026-03-10T14:12:07.163 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:07.162+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 <== mon.1 v1:192.168.123.109:6789/0 12 ==== osd_map(780..780 src has 254..780) ==== 628+0+0 (unknown 1063575627 0 0) 0x7fbcf4002aa0 con 0x7fbd0410dc80 2026-03-10T14:12:07.163 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:07.162+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 --> v1:192.168.123.109:6789/0 -- mon_subscribe({osdmap=781}) -- 0x7fbce0003620 con 0x7fbd0410dc80 2026-03-10T14:12:07.172 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:07.171+0000 7fbd037fe640 1 -- 192.168.123.105:0/3061652126 <== mon.1 v1:192.168.123.109:6789/0 13 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e v780) ==== 221+0+0 (unknown 1533135055 0 0) 0x7fbcf4093750 con 0x7fbd0410dc80 2026-03-10T14:12:07.172 INFO:tasks.workunit.client.0.vm05.stderr:set-quota max_objects = 0 for pool 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e 2026-03-10T14:12:07.174 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:07.174+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/3061652126 >> v1:192.168.123.105:6800/1010796596 conn(0x7fbce0077e30 legacy=0x7fbce007a2f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:12:07.174 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:07.174+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/3061652126 >> v1:192.168.123.109:6789/0 conn(0x7fbd0410dc80 legacy=0x7fbd041b35a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:12:07.174 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:07.174+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/3061652126 shutdown_connections 2026-03-10T14:12:07.174 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:07.174+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/3061652126 >> 192.168.123.105:0/3061652126 conn(0x7fbd0406d6d0 msgr2=0x7fbd0410fc40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:12:07.175 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:07.174+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/3061652126 shutdown_connections 2026-03-10T14:12:07.175 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:07.174+0000 7fbd0c7cc640 1 -- 192.168.123.105:0/3061652126 wait complete. 2026-03-10T14:12:07.182 INFO:tasks.workunit.client.0.vm05.stderr:+ sleep 30 2026-03-10T14:12:07.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:07 vm09.local ceph-mon[53367]: from='client.50806 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]': finished 2026-03-10T14:12:07.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:07 vm09.local ceph-mon[53367]: osdmap e779: 8 total, 8 up, 8 in 2026-03-10T14:12:07.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:07 vm09.local ceph-mon[53367]: from='client.? 
2026-03-10T14:12:07.424 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:07 vm09.local ceph-mon[53367]: from='client.50806 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T14:12:07.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:07 vm05.local ceph-mon[51512]: from='client.50806 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]': finished
2026-03-10T14:12:07.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:07 vm05.local ceph-mon[51512]: osdmap e779: 8 total, 8 up, 8 in
2026-03-10T14:12:07.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:07 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/3061652126' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T14:12:07.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:07 vm05.local ceph-mon[51512]: from='client.50806 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T14:12:07.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:07 vm05.local ceph-mon[58955]: from='client.50806 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]': finished
2026-03-10T14:12:07.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:07 vm05.local ceph-mon[58955]: osdmap e779: 8 total, 8 up, 8 in
2026-03-10T14:12:07.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:07 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/3061652126' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T14:12:07.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:07 vm05.local ceph-mon[58955]: from='client.50806 ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T14:12:08.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:08 vm09.local ceph-mon[53367]: pgmap v1743: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 341 B/s wr, 0 op/s
2026-03-10T14:12:08.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:08 vm09.local ceph-mon[53367]: from='client.50806 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]': finished
2026-03-10T14:12:08.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:08 vm09.local ceph-mon[53367]: osdmap e780: 8 total, 8 up, 8 in
2026-03-10T14:12:08.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:08 vm05.local ceph-mon[58955]: pgmap v1743: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 341 B/s wr, 0 op/s
2026-03-10T14:12:08.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:08 vm05.local ceph-mon[58955]: from='client.50806 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]': finished
2026-03-10T14:12:08.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:08 vm05.local ceph-mon[58955]: osdmap e780: 8 total, 8 up, 8 in
2026-03-10T14:12:08.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:08 vm05.local ceph-mon[51512]: pgmap v1743: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 341 B/s wr, 0 op/s
2026-03-10T14:12:08.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:08 vm05.local ceph-mon[51512]: from='client.50806 ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "field": "max_objects", "val": "0"}]': finished
2026-03-10T14:12:08.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:08 vm05.local ceph-mon[51512]: osdmap e780: 8 total, 8 up, 8 in
2026-03-10T14:12:09.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:09 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:12:09.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:09 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:12:09.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:09 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:12:10.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:12:09 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:12:10.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:10 vm05.local ceph-mon[51512]: pgmap v1745: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T14:12:10.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:10 vm05.local ceph-mon[58955]: pgmap v1745: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T14:12:10.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:12:09 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:12:09] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:12:10.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:10 vm09.local ceph-mon[53367]: pgmap v1745: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T14:12:11.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:11 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:12:11.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:11 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:12:11.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:11 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:12:12.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:12 vm05.local ceph-mon[51512]: pgmap v1746: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:12:12.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:12 vm05.local ceph-mon[58955]: pgmap v1746: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:12:12.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:12 vm09.local ceph-mon[53367]: pgmap v1746: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:12:13.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:13 vm05.local ceph-mon[51512]: pgmap v1747: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 697 B/s rd, 0 op/s
2026-03-10T14:12:13.581 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:13 vm05.local ceph-mon[58955]: pgmap v1747: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 697 B/s rd, 0 op/s
2026-03-10T14:12:13.673 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:13 vm09.local ceph-mon[53367]: pgmap v1747: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 697 B/s rd, 0 op/s
2026-03-10T14:12:15.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:15 vm05.local ceph-mon[51512]: pgmap v1748: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:12:15.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:15 vm05.local ceph-mon[58955]: pgmap v1748: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:12:15.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:15 vm09.local ceph-mon[53367]: pgmap v1748: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:12:17.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:17 vm05.local ceph-mon[58955]: pgmap v1749: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T14:12:17.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:17 vm05.local ceph-mon[51512]: pgmap v1749: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T14:12:17.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:17 vm09.local ceph-mon[53367]: pgmap v1749: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T14:12:19.825 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:19 vm09.local ceph-mon[53367]: pgmap v1750: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-10T14:12:19.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:19 vm05.local ceph-mon[58955]: pgmap v1750: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-10T14:12:19.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:19 vm05.local ceph-mon[51512]: pgmap v1750: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-10T14:12:20.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:12:19 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:12:20.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:12:19 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:12:19] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:12:20.831 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:20 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:12:20.831 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:20 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:12:20.923 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:20 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T14:12:22.081 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:21 vm05.local ceph-mon[58955]: pgmap v1751: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:12:22.082 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:21 vm05.local ceph-mon[51512]: pgmap v1751: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:12:22.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:21 vm09.local ceph-mon[53367]: pgmap v1751: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:12:24.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:23 vm09.local ceph-mon[53367]: pgmap v1752: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:12:24.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:23 vm05.local ceph-mon[58955]: pgmap v1752: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:12:24.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:23 vm05.local ceph-mon[51512]: pgmap v1752: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:12:25.174 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:24 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:12:25.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:24 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:12:25.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:24 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:12:26.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:25 vm09.local ceph-mon[53367]: pgmap v1753: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:12:26.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:25 vm05.local ceph-mon[58955]: pgmap v1753: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:12:26.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:25 vm05.local ceph-mon[51512]: pgmap v1753: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:12:28.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:27 vm09.local ceph-mon[53367]: pgmap v1754: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:12:28.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:27 vm05.local ceph-mon[58955]: pgmap v1754: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:12:28.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:27 vm05.local ceph-mon[51512]: pgmap v1754: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:12:30.173 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:29 vm09.local ceph-mon[53367]: pgmap v1755: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:12:30.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:12:29 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:12:30.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:29 vm05.local ceph-mon[58955]: pgmap v1755: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:12:30.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:29 vm05.local ceph-mon[51512]: pgmap v1755: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T14:12:30.331 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:12:29 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:12:29] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:12:31.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:30 vm05.local ceph-mon[58955]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
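During the 30-second sleep the log settles into steady-state chatter: every two seconds each of the three mons journals the same pgmap epoch, mgr.y polls 'osd blocklist ls', the iscsi gateway reports no tcmu-runner data, and Prometheus scrapes /metrics (receiving 503 from this mgr). When triaging a run it can help to reduce that chatter to one sample per pgmap version; a hypothetical parser written against the journalctl lines above:

    import re

    # Pattern modeled on the summaries above, e.g.
    # "pgmap v1745: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, ..."
    PGMAP = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: "
        r".*?; (?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used"
    )

    def parse_pgmap(line):
        m = PGMAP.search(line)
        return m.groupdict() if m else None

    sample = ("pgmap v1745: 188 pgs: 188 active+clean; 492 KiB data, "
              "1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s")
    print(parse_pgmap(sample))  # {'version': '1745', 'pgs': '188', ...}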
"service status", "format": "json"}]: dispatch 2026-03-10T14:12:31.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:30 vm05.local ceph-mon[51512]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:12:31.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:30 vm09.local ceph-mon[53367]: from='client.24484 v1:192.168.123.109:0/1993315411' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T14:12:32.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:32 vm05.local ceph-mon[58955]: pgmap v1756: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:12:32.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:31 vm05.local ceph-mon[51512]: pgmap v1756: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:12:32.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:32 vm09.local ceph-mon[53367]: pgmap v1756: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:12:34.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:34 vm05.local ceph-mon[58955]: pgmap v1757: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:12:34.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:34 vm05.local ceph-mon[51512]: pgmap v1757: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:12:34.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:34 vm09.local ceph-mon[53367]: pgmap v1757: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T14:12:36.331 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:36 vm05.local ceph-mon[58955]: pgmap v1758: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:12:36.331 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:36 vm05.local ceph-mon[51512]: pgmap v1758: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:12:36.423 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:36 vm09.local ceph-mon[53367]: pgmap v1758: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T14:12:37.184 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool delete 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e 29a3ecc7-28e3-45f6-a8f8-5780a9b8288e --yes-i-really-really-mean-it 2026-03-10T14:12:37.237 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.236+0000 7f78056f1640 1 -- 192.168.123.105:0/1054799747 >> v1:192.168.123.105:6790/0 conn(0x7f780010d7a0 legacy=0x7f780010fb90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:12:37.237 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.236+0000 7f78056f1640 1 -- 192.168.123.105:0/1054799747 shutdown_connections 2026-03-10T14:12:37.237 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.236+0000 7f78056f1640 1 -- 192.168.123.105:0/1054799747 >> 192.168.123.105:0/1054799747 conn(0x7f78001005f0 msgr2=0x7f7800102a10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:12:37.237 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.236+0000 7f78056f1640 1 -- 192.168.123.105:0/1054799747 
shutdown_connections 2026-03-10T14:12:37.237 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.236+0000 7f78056f1640 1 -- 192.168.123.105:0/1054799747 wait complete. 2026-03-10T14:12:37.237 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.236+0000 7f78056f1640 1 Processor -- start 2026-03-10T14:12:37.237 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.236+0000 7f78056f1640 1 -- start start 2026-03-10T14:12:37.237 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.237+0000 7f78056f1640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f7800110fc0 con 0x7f780010a900 2026-03-10T14:12:37.237 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.237+0000 7f78056f1640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f78001acda0 con 0x7f780010d7a0 2026-03-10T14:12:37.237 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.237+0000 7f78056f1640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f78001adf80 con 0x7f7800111370 2026-03-10T14:12:37.237 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.237+0000 7f77ff7fe640 1 --1- >> v1:192.168.123.105:6790/0 conn(0x7f7800111370 0x7f78001aa670 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.105:6790/0 says I am v1:192.168.123.105:43746/0 (socket says 192.168.123.105:43746) 2026-03-10T14:12:37.237 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.237+0000 7f77ff7fe640 1 -- 192.168.123.105:0/2945308172 learned_addr learned my addr 192.168.123.105:0/2945308172 (peer_addr_for_me v1:192.168.123.105:0/0) 2026-03-10T14:12:37.238 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.237+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2497136981 0 0) 0x7f78001adf80 con 0x7f7800111370 2026-03-10T14:12:37.238 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.237+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f77d4003620 con 0x7f7800111370 2026-03-10T14:12:37.238 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.237+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 1292549233 0 0) 0x7f77d4003620 con 0x7f7800111370 2026-03-10T14:12:37.238 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.237+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f78001adf80 con 0x7f7800111370 2026-03-10T14:12:37.238 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.237+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f77f00034c0 con 0x7f7800111370 2026-03-10T14:12:37.238 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.237+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2452284454 0 0) 0x7f78001adf80 con 0x7f7800111370 2026-03-10T14:12:37.238 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.237+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 >> v1:192.168.123.109:6789/0 conn(0x7f780010d7a0 legacy=0x7f780010e620 unknown :-1 s=STATE_CONNECTING 
l=1).mark_down 2026-03-10T14:12:37.238 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.237+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 >> v1:192.168.123.105:6789/0 conn(0x7f780010a900 legacy=0x7f780010df10 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T14:12:37.238 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.238+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f78001af160 con 0x7f7800111370 2026-03-10T14:12:37.239 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.238+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f77f0003760 con 0x7f7800111370 2026-03-10T14:12:37.239 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.238+0000 7f78056f1640 1 -- 192.168.123.105:0/2945308172 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f78001acf70 con 0x7f7800111370 2026-03-10T14:12:37.239 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.238+0000 7f78056f1640 1 -- 192.168.123.105:0/2945308172 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f78001ad4d0 con 0x7f7800111370 2026-03-10T14:12:37.240 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.239+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f77f0003ee0 con 0x7f7800111370 2026-03-10T14:12:37.240 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.239+0000 7f78056f1640 1 -- 192.168.123.105:0/2945308172 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f77c4005180 con 0x7f7800111370 2026-03-10T14:12:37.243 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.242+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 3169204467 0 0) 0x7f77f0004420 con 0x7f7800111370 2026-03-10T14:12:37.243 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.243+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(780..780 src has 254..780) ==== 8166+0+0 (unknown 3405620931 0 0) 0x7f77f0095e40 con 0x7f7800111370 2026-03-10T14:12:37.243 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.243+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=781}) -- 0x7f78001adf80 con 0x7f7800111370 2026-03-10T14:12:37.243 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.243+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f77f00962e0 con 0x7f7800111370 2026-03-10T14:12:37.343 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:37.342+0000 7f78056f1640 1 -- 192.168.123.105:0/2945308172 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true} v 0) -- 0x7f77c4005470 con 0x7f7800111370 2026-03-10T14:12:38.156 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.155+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 <== mon.2 v1:192.168.123.105:6790/0 10 ==== osd_map(781..781 src has 254..781) 
==== 296+0+0 (unknown 3984984606 0 0) 0x7f77f005a260 con 0x7f7800111370 2026-03-10T14:12:38.156 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.155+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=782}) -- 0x7f78001acda0 con 0x7f7800111370 2026-03-10T14:12:38.177 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.176+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 <== mon.2 v1:192.168.123.105:6790/0 11 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]=0 pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' removed v781) ==== 248+0+0 (unknown 1968294305 0 0) 0x7f77f0062300 con 0x7f7800111370 2026-03-10T14:12:38.232 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.231+0000 7f78056f1640 1 -- 192.168.123.105:0/2945308172 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true} v 0) -- 0x7f77c4005c80 con 0x7f7800111370 2026-03-10T14:12:38.233 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.233+0000 7f77dffff640 1 -- 192.168.123.105:0/2945308172 <== mon.2 v1:192.168.123.105:6790/0 12 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]=0 pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' does not exist v781) ==== 255+0+0 (unknown 4057922634 0 0) 0x7f77f0093a80 con 0x7f7800111370 2026-03-10T14:12:38.233 INFO:tasks.workunit.client.0.vm05.stderr:pool '29a3ecc7-28e3-45f6-a8f8-5780a9b8288e' does not exist 2026-03-10T14:12:38.236 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.235+0000 7f78056f1640 1 -- 192.168.123.105:0/2945308172 >> v1:192.168.123.105:6800/1010796596 conn(0x7f77d4078200 legacy=0x7f77d407a6c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:12:38.236 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.235+0000 7f78056f1640 1 -- 192.168.123.105:0/2945308172 >> v1:192.168.123.105:6790/0 conn(0x7f7800111370 legacy=0x7f78001aa670 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T14:12:38.236 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.235+0000 7f78056f1640 1 -- 192.168.123.105:0/2945308172 shutdown_connections 2026-03-10T14:12:38.236 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.235+0000 7f78056f1640 1 -- 192.168.123.105:0/2945308172 >> 192.168.123.105:0/2945308172 conn(0x7f78001005f0 msgr2=0x7f7800111da0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T14:12:38.236 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.236+0000 7f78056f1640 1 -- 192.168.123.105:0/2945308172 shutdown_connections 2026-03-10T14:12:38.236 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.236+0000 7f78056f1640 1 -- 192.168.123.105:0/2945308172 wait complete. 
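The delete sequence above is deliberately idempotent: the first mon_command_ack reports pool removed, the second reports does not exist, and both carry =0, so the workunit's double delete still counts as success. On the CLI a pool delete must name the pool twice and pass --yes-i-really-really-mean-it, and the mons must permit deletion (mon_allow_pool_delete). A sketch of a similar guarded cleanup through the python-rados binding (assumes a reachable cluster via a local ceph.conf and admin keyring; the helper itself is hypothetical):

    import rados

    # delete_pool() raises on error, so the already-deleted case is
    # handled here by checking pool_exists() first.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        pool = "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e"
        if cluster.pool_exists(pool):
            cluster.delete_pool(pool)
    finally:
        cluster.shutdown()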
2026-03-10T14:12:38.244 INFO:tasks.workunit.client.0.vm05.stderr:+ ceph osd pool delete ffbc96f0-d53c-4a94-9954-47f277c886bf ffbc96f0-d53c-4a94-9954-47f277c886bf --yes-i-really-really-mean-it
2026-03-10T14:12:38.299 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.298+0000 7f6a57933640 1 -- 192.168.123.105:0/1326530069 >> v1:192.168.123.105:6789/0 conn(0x7f6a5010f0f0 legacy=0x7f6a50111590 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T14:12:38.299 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.298+0000 7f6a57933640 1 -- 192.168.123.105:0/1326530069 shutdown_connections
2026-03-10T14:12:38.299 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.298+0000 7f6a57933640 1 -- 192.168.123.105:0/1326530069 >> 192.168.123.105:0/1326530069 conn(0x7f6a500fe3b0 msgr2=0x7f6a501007d0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T14:12:38.299 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.298+0000 7f6a57933640 1 -- 192.168.123.105:0/1326530069 shutdown_connections
2026-03-10T14:12:38.299 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.298+0000 7f6a57933640 1 -- 192.168.123.105:0/1326530069 wait complete.
2026-03-10T14:12:38.299 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.299+0000 7f6a57933640 1 Processor -- start
2026-03-10T14:12:38.300 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.299+0000 7f6a57933640 1 -- start start
2026-03-10T14:12:38.300 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.299+0000 7f6a57933640 1 -- --> v1:192.168.123.105:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6a5010ed50 con 0x7f6a50108680
2026-03-10T14:12:38.300 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.299+0000 7f6a57933640 1 -- --> v1:192.168.123.109:6789/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6a501aab20 con 0x7f6a5010f0f0
2026-03-10T14:12:38.300 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.299+0000 7f6a57933640 1 -- --> v1:192.168.123.105:6790/0 -- auth(proto 0 34 bytes epoch 0) -- 0x7f6a501abd00 con 0x7f6a5010b520
2026-03-10T14:12:38.300 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.299+0000 7f6a55ea9640 1 --1- >> v1:192.168.123.109:6789/0 conn(0x7f6a5010f0f0 0x7f6a501a83f0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=1).handle_server_banner_and_identify peer v1:192.168.123.109:6789/0 says I am v1:192.168.123.105:49656/0 (socket says 192.168.123.105:49656)
2026-03-10T14:12:38.300 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.299+0000 7f6a55ea9640 1 -- 192.168.123.105:0/1882239610 learned_addr learned my addr 192.168.123.105:0/1882239610 (peer_addr_for_me v1:192.168.123.105:0/0)
2026-03-10T14:12:38.300 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.299+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.2 v1:192.168.123.105:6790/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1625058215 0 0) 0x7f6a501abd00 con 0x7f6a5010b520
2026-03-10T14:12:38.300 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.299+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 --> v1:192.168.123.105:6790/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6a2c003620 con 0x7f6a5010b520
2026-03-10T14:12:38.300 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.1 v1:192.168.123.109:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3844861294 0 0) 0x7f6a501aab20 con 0x7f6a5010f0f0
2026-03-10T14:12:38.300 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 --> v1:192.168.123.109:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6a501abd00 con 0x7f6a5010f0f0
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.0 v1:192.168.123.105:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3246008372 0 0) 0x7f6a5010ed50 con 0x7f6a50108680
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 --> v1:192.168.123.105:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6a501aab20 con 0x7f6a50108680
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.2 v1:192.168.123.105:6790/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 2993630549 0 0) 0x7f6a2c003620 con 0x7f6a5010b520
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 --> v1:192.168.123.105:6790/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6a5010ed50 con 0x7f6a5010b520
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.2 v1:192.168.123.105:6790/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f6a400030d0 con 0x7f6a5010b520
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.1 v1:192.168.123.109:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 3876087549 0 0) 0x7f6a501abd00 con 0x7f6a5010f0f0
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 --> v1:192.168.123.109:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6a2c003620 con 0x7f6a5010f0f0
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.0 v1:192.168.123.105:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 764+0+0 (unknown 469011097 0 0) 0x7f6a501aab20 con 0x7f6a50108680
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 --> v1:192.168.123.105:6789/0 -- auth(proto 2 165 bytes epoch 0) -- 0x7f6a501abd00 con 0x7f6a50108680
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.0 v1:192.168.123.105:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f6a44002fe0 con 0x7f6a50108680
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.1 v1:192.168.123.109:6789/0 3 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f6a4c003580 con 0x7f6a5010f0f0
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.2 v1:192.168.123.105:6790/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 580+0+0 (unknown 2906379969 0 0) 0x7f6a5010ed50 con 0x7f6a5010b520
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 >> v1:192.168.123.109:6789/0 conn(0x7f6a5010f0f0 legacy=0x7f6a501a83f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 >> v1:192.168.123.105:6789/0 conn(0x7f6a50108680 legacy=0x7f6a5010d580 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 --> v1:192.168.123.105:6790/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6a501acee0 con 0x7f6a5010b520
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.2 v1:192.168.123.105:6790/0 5 ==== config(24 keys) ==== 1004+0+0 (unknown 1815091123 0 0) 0x7f6a400036c0 con 0x7f6a5010b520
2026-03-10T14:12:38.301 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a57933640 1 -- 192.168.123.105:0/1882239610 --> v1:192.168.123.105:6790/0 -- mon_subscribe({mgrmap=0+}) -- 0x7f6a501abed0 con 0x7f6a5010b520
2026-03-10T14:12:38.302 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.2 v1:192.168.123.105:6790/0 6 ==== mon_map magic: 0 ==== 308+0+0 (unknown 523666120 0 0) 0x7f6a40004af0 con 0x7f6a5010b520
2026-03-10T14:12:38.303 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.300+0000 7f6a57933640 1 -- 192.168.123.105:0/1882239610 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=0}) -- 0x7f6a501ac440 con 0x7f6a5010b520
2026-03-10T14:12:38.305 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.302+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.2 v1:192.168.123.105:6790/0 7 ==== mgrmap(e 22) ==== 100060+0+0 (unknown 3169204467 0 0) 0x7f6a4001d1c0 con 0x7f6a5010b520
2026-03-10T14:12:38.305 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.302+0000 7f6a57933640 1 -- 192.168.123.105:0/1882239610 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6a18005180 con 0x7f6a5010b520
2026-03-10T14:12:38.311 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.305+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.2 v1:192.168.123.105:6790/0 8 ==== osd_map(781..781 src has 254..781) ==== 7778+0+0 (unknown 4030761925 0 0) 0x7f6a40095250 con 0x7f6a5010b520
2026-03-10T14:12:38.311 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.305+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=782}) -- 0x7f6a5010ed50 con 0x7f6a5010b520
2026-03-10T14:12:38.311 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.311+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.2 v1:192.168.123.105:6790/0 9 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (unknown 1092875540 0 2568732696) 0x7f6a40061890 con 0x7f6a5010b520
2026-03-10T14:12:38.411 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:38.410+0000 7f6a57933640 1 -- 192.168.123.105:0/1882239610 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true} v 0) -- 0x7f6a18005470 con 0x7f6a5010b520
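Each fresh ceph invocation above shows the monclient "hunting": it opens auth sessions to all three mons in parallel, keeps the first to complete (mon.2 at v1:192.168.123.105:6790/0 in both of these runs), and mark_downs the other two, which is why every one-shot CLI call produces three interleaved auth exchanges at debug ms = 1. When reproducing a failure by hand, pinning one mon with the CLI's -m flag keeps the transcript quieter; a hypothetical example against the mon this run kept:

    import subprocess

    # -m pins the monitor address, skipping the three-way hunt seen above.
    subprocess.run(
        ["ceph", "-m", "192.168.123.105:6790", "osd", "pool", "ls"],
        check=True,
    )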
2026-03-10T14:12:38.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:38 vm05.local ceph-mon[51512]: pgmap v1759: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:12:38.581 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:38 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2945308172' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:38.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:38 vm05.local ceph-mon[51512]: from='client.50312 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:38.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:38 vm05.local ceph-mon[58955]: pgmap v1759: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:12:38.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:38 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2945308172' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:38.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:38 vm05.local ceph-mon[58955]: from='client.50312 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:38 vm09.local ceph-mon[53367]: pgmap v1759: 188 pgs: 188 active+clean; 492 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T14:12:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:38 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2945308172' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:38.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:38 vm09.local ceph-mon[53367]: from='client.50312 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:39.191 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:39.190+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.2 v1:192.168.123.105:6790/0 10 ==== osd_map(782..782 src has 254..782) ==== 296+0+0 (unknown 3274751777 0 0) 0x7f6a400597f0 con 0x7f6a5010b520
2026-03-10T14:12:39.191 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:39.190+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 --> v1:192.168.123.105:6790/0 -- mon_subscribe({osdmap=783}) -- 0x7f6a501abd00 con 0x7f6a5010b520
2026-03-10T14:12:39.208 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:39.207+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.2 v1:192.168.123.105:6790/0 11 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]=0 pool 'ffbc96f0-d53c-4a94-9954-47f277c886bf' removed v782) ==== 248+0+0 (unknown 282179252 0 0) 0x7f6a40058b70 con 0x7f6a5010b520
2026-03-10T14:12:39.267 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:39.266+0000 7f6a57933640 1 -- 192.168.123.105:0/1882239610 --> v1:192.168.123.105:6790/0 -- mon_command({"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true} v 0) -- 0x7f6a18005d40 con 0x7f6a5010b520
2026-03-10T14:12:39.268 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:39.268+0000 7f6a3e7fc640 1 -- 192.168.123.105:0/1882239610 <== mon.2 v1:192.168.123.105:6790/0 12 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]=0 pool 'ffbc96f0-d53c-4a94-9954-47f277c886bf' does not exist v782) ==== 255+0+0 (unknown 2517019178 0 0) 0x7f6a40059490 con 0x7f6a5010b520
2026-03-10T14:12:39.268 INFO:tasks.workunit.client.0.vm05.stderr:pool 'ffbc96f0-d53c-4a94-9954-47f277c886bf' does not exist
2026-03-10T14:12:39.271 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:39.271+0000 7f6a57933640 1 -- 192.168.123.105:0/1882239610 >> v1:192.168.123.105:6800/1010796596 conn(0x7f6a2c07ca10 legacy=0x7f6a2c07eed0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T14:12:39.271 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:39.271+0000 7f6a57933640 1 -- 192.168.123.105:0/1882239610 >> v1:192.168.123.105:6790/0 conn(0x7f6a5010b520 legacy=0x7f6a5010dc90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T14:12:39.271 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:39.271+0000 7f6a57933640 1 -- 192.168.123.105:0/1882239610 shutdown_connections
2026-03-10T14:12:39.271 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:39.271+0000 7f6a57933640 1 -- 192.168.123.105:0/1882239610 >> 192.168.123.105:0/1882239610 conn(0x7f6a500fe3b0 msgr2=0x7f6a5010aac0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T14:12:39.272 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:39.271+0000 7f6a57933640 1 -- 192.168.123.105:0/1882239610 shutdown_connections
2026-03-10T14:12:39.272 INFO:tasks.workunit.client.0.vm05.stderr:2026-03-10T14:12:39.271+0000 7f6a57933640 1 -- 192.168.123.105:0/1882239610 wait complete.
2026-03-10T14:12:39.279 INFO:tasks.workunit.client.0.vm05.stdout:OK
2026-03-10T14:12:39.279 INFO:tasks.workunit.client.0.vm05.stderr:+ echo OK
2026-03-10T14:12:39.280 INFO:teuthology.orchestra.run:Running command with timeout 3600
2026-03-10T14:12:39.280 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-10T14:12:39.315 INFO:tasks.workunit:Stopping ['rados/test.sh', 'rados/test_pool_quota.sh'] on client.0...
2026-03-10T14:12:39.315 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0
2026-03-10T14:12:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:39 vm05.local ceph-mon[51512]: from='client.50312 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T14:12:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:39 vm05.local ceph-mon[51512]: osdmap e781: 8 total, 8 up, 8 in
2026-03-10T14:12:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:39 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/2945308172' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:39 vm05.local ceph-mon[51512]: from='client.50312 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:39 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1882239610' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:39 vm05.local ceph-mon[51512]: from='client.50315 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:39.582 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:39 vm05.local ceph-mon[51512]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:12:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:39 vm05.local ceph-mon[58955]: from='client.50312 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T14:12:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:39 vm05.local ceph-mon[58955]: osdmap e781: 8 total, 8 up, 8 in
2026-03-10T14:12:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:39 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/2945308172' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:39 vm05.local ceph-mon[58955]: from='client.50312 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:39 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1882239610' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:39 vm05.local ceph-mon[58955]: from='client.50315 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:39.582 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:39 vm05.local ceph-mon[58955]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:12:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:39 vm09.local ceph-mon[53367]: from='client.50312 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T14:12:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:39 vm09.local ceph-mon[53367]: osdmap e781: 8 total, 8 up, 8 in
2026-03-10T14:12:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:39 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/2945308172' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:39 vm09.local ceph-mon[53367]: from='client.50312 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "pool2": "29a3ecc7-28e3-45f6-a8f8-5780a9b8288e", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:39 vm09.local ceph-mon[53367]: from='client.? v1:192.168.123.105:0/1882239610' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:39 vm09.local ceph-mon[53367]: from='client.50315 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:39.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:39 vm09.local ceph-mon[53367]: from='mgr.14712 v1:192.168.123.105:0/3994685623' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:12:39.739 DEBUG:teuthology.parallel:result is None
2026-03-10T14:12:39.739 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0
2026-03-10T14:12:39.766 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0
2026-03-10T14:12:39.766 DEBUG:teuthology.orchestra.run.vm05:> rmdir -- /home/ubuntu/cephtest/mnt.0
2026-03-10T14:12:39.823 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0
2026-03-10T14:12:39.823 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-10T14:12:39.840 INFO:tasks.cephadm:Teardown begin
2026-03-10T14:12:39.840 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T14:12:39.887 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T14:12:39.947 INFO:tasks.cephadm:Disabling cephadm mgr module
2026-03-10T14:12:39.947 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 -- ceph mgr module disable cephadm
2026-03-10T14:12:40.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:12:39 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: ::ffff:192.168.123.109 - - [10/Mar/2026:14:12:39] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T14:12:40.122 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/mon.a/config
2026-03-10T14:12:40.141 INFO:teuthology.orchestra.run.vm05.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory
2026-03-10T14:12:40.164 DEBUG:teuthology.orchestra.run:got remote process result: 125
2026-03-10T14:12:40.164 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
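Note the teardown ordering captured above: 'Teardown begin' first removes /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring on both hosts, then runs cephadm shell with -k pointing at the keyring it just deleted, so podman cannot bind-mount it (Error: statfs ... no such file or directory) and the mgr-module-disable step exits 125. A defensive wrapper (hypothetical; the flags are copied from the command line above) would only pass -k while the keyring still exists:

    import os
    import subprocess

    def cephadm_shell(fsid, image, *cmd):
        argv = ["sudo", "cephadm", "--image", image, "shell",
                "-c", "/etc/ceph/ceph.conf", "--fsid", fsid]
        keyring = "/etc/ceph/ceph.client.admin.keyring"
        if os.path.exists(keyring):  # skip -k once teardown removed it
            argv += ["-k", keyring]
        return subprocess.run(argv + ["--"] + list(cmd))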
2026-03-10T14:12:40.164 DEBUG:teuthology.orchestra.run.vm05:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T14:12:40.174 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:12:39 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug there is no tcmu-runner data available
2026-03-10T14:12:40.180 DEBUG:teuthology.orchestra.run.vm09:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T14:12:40.196 INFO:tasks.cephadm:Stopping all daemons...
2026-03-10T14:12:40.196 INFO:tasks.cephadm.mon.a:Stopping mon.a...
2026-03-10T14:12:40.196 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.a
2026-03-10T14:12:40.474 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-mon[58955]: pgmap v1761: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T14:12:40.474 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-mon[58955]: Health check cleared: POOL_FULL (was: 1 pool(s) full)
2026-03-10T14:12:40.474 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-mon[58955]: from='client.50315 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T14:12:40.474 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-mon[58955]: osdmap e782: 8 total, 8 up, 8 in
2026-03-10T14:12:40.474 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-mon[58955]: from='client.? v1:192.168.123.105:0/1882239610' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:40.474 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-mon[58955]: from='client.50315 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T14:12:40.475 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-mon[51512]: pgmap v1761: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T14:12:40.475 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-mon[51512]: Health check cleared: POOL_FULL (was: 1 pool(s) full)
2026-03-10T14:12:40.475 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-mon[51512]: from='client.50315 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T14:12:40.475 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-mon[51512]: osdmap e782: 8 total, 8 up, 8 in
2026-03-10T14:12:40.475 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-mon[51512]: from='client.? v1:192.168.123.105:0/1882239610' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]: dispatch
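cephadm daemons run as systemd units named ceph-<fsid>@<daemon-id>, which is what the 'systemctl stop ceph-e063dc72-...@mon.a' command above targets; the journal that follows shows systemd forwarding the termination signal into the mon's podman container. A small helper along those lines (unit naming taken from the log; the helper itself is hypothetical):

    import subprocess

    def stop_cephadm_daemon(fsid, daemon):
        # Unit name format observed above: ceph-<fsid>@<daemon-id>
        unit = f"ceph-{fsid}@{daemon}"
        subprocess.run(["sudo", "systemctl", "stop", unit], check=True)

    stop_cephadm_daemon("e063dc72-1c85-11f1-a098-09993c5c5b66", "mon.a")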
v1:192.168.123.105:0/1882239610' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T14:12:40.475 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-mon[51512]: from='client.50315 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T14:12:40.475 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:40 vm05.local systemd[1]: Stopping Ceph mon.a for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T14:12:40.475 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a[51487]: 2026-03-10T14:12:40.336+0000 7fe52b4c6640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T14:12:40.475 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a[51487]: 2026-03-10T14:12:40.336+0000 7fe52b4c6640 -1 mon.a@0(leader) e3 *** Got Signal Terminated *** 2026-03-10T14:12:40.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:40 vm09.local ceph-mon[53367]: pgmap v1761: 176 pgs: 176 active+clean; 470 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T14:12:40.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:40 vm09.local ceph-mon[53367]: Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T14:12:40.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:40 vm09.local ceph-mon[53367]: from='client.50315 ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]': finished 2026-03-10T14:12:40.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:40 vm09.local ceph-mon[53367]: osdmap e782: 8 total, 8 up, 8 in 2026-03-10T14:12:40.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:40 vm09.local ceph-mon[53367]: from='client.? 
v1:192.168.123.105:0/1882239610' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T14:12:40.674 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:40 vm09.local ceph-mon[53367]: from='client.50315 ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "pool2": "ffbc96f0-d53c-4a94-9954-47f277c886bf", "yes_i_really_really_mean_it": true}]: dispatch 2026-03-10T14:12:40.739 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:40 vm05.local podman[167266]: 2026-03-10 14:12:40.474550481 +0000 UTC m=+0.152584537 container stop 0cf81e75bce1552c1892a2cb7d20c1b236286d4a36cfcb8bc67d75827f5d7598 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid) 2026-03-10T14:12:40.739 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:40 vm05.local podman[167266]: 2026-03-10 14:12:40.504316893 +0000 UTC m=+0.182350949 container died 0cf81e75bce1552c1892a2cb7d20c1b236286d4a36cfcb8bc67d75827f5d7598 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20260223) 2026-03-10T14:12:40.739 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:40 vm05.local podman[167266]: 2026-03-10 14:12:40.671542737 +0000 UTC m=+0.349576793 container remove 0cf81e75bce1552c1892a2cb7d20c1b236286d4a36cfcb8bc67d75827f5d7598 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
org.label-schema.license=GPLv2) 2026-03-10T14:12:40.739 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 14:12:40 vm05.local bash[167266]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-a 2026-03-10T14:12:40.750 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.a.service' 2026-03-10T14:12:40.787 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T14:12:40.787 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-10T14:12:40.788 INFO:tasks.cephadm.mon.b:Stopping mon.c... 2026-03-10T14:12:40.788 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.c 2026-03-10T14:12:41.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:40 vm05.local systemd[1]: Stopping Ceph mon.c for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T14:12:41.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-c[58932]: 2026-03-10T14:12:40.950+0000 7efc19c74640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T14:12:41.082 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-c[58932]: 2026-03-10T14:12:40.950+0000 7efc19c74640 -1 mon.c@2(peon) e3 *** Got Signal Terminated *** 2026-03-10T14:12:41.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:14:12:40] ENGINE Bus STOPPING 2026-03-10T14:12:41.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:14:12:40] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T14:12:41.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:14:12:40] ENGINE Bus STOPPED 2026-03-10T14:12:41.082 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:12:40 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:14:12:40] ENGINE Bus STARTING 2026-03-10T14:12:41.327 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.c.service' 2026-03-10T14:12:41.349 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:12:41 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:14:12:41] ENGINE Serving on http://:::9283 2026-03-10T14:12:41.349 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:12:41 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y[51744]: [10/Mar/2026:14:12:41] ENGINE Bus STARTED 2026-03-10T14:12:41.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:41 vm05.local podman[167381]: 2026-03-10 14:12:41.138509295 +0000 UTC m=+0.210323291 container died fb825a8a53354a45bcc414311b4020f6e6e36c7d88c8a3339968221bfe0c3da7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-c, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, 
org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T14:12:41.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:41 vm05.local podman[167381]: 2026-03-10 14:12:41.261878313 +0000 UTC m=+0.333692309 container remove fb825a8a53354a45bcc414311b4020f6e6e36c7d88c8a3339968221bfe0c3da7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-c, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T14:12:41.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:41 vm05.local bash[167381]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-c 2026-03-10T14:12:41.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:41 vm05.local systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.c.service: Deactivated successfully. 2026-03-10T14:12:41.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:41 vm05.local systemd[1]: Stopped Ceph mon.c for e063dc72-1c85-11f1-a098-09993c5c5b66. 2026-03-10T14:12:41.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 14:12:41 vm05.local systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.c.service: Consumed 23.488s CPU time. 2026-03-10T14:12:41.358 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T14:12:41.358 INFO:tasks.cephadm.mon.b:Stopped mon.c 2026-03-10T14:12:41.358 INFO:tasks.cephadm.mon.b:Stopping mon.b... 2026-03-10T14:12:41.358 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.b 2026-03-10T14:12:41.675 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:41 vm09.local systemd[1]: Stopping Ceph mon.b for e063dc72-1c85-11f1-a098-09993c5c5b66... 
2026-03-10T14:12:41.675 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:41 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-b[53343]: 2026-03-10T14:12:41.469+0000 7fd5c64dd640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T14:12:41.675 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 10 14:12:41 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mon-b[53343]: 2026-03-10T14:12:41.469+0000 7fd5c64dd640 -1 mon.b@1(peon) e3 *** Got Signal Terminated *** 2026-03-10T14:12:41.929 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mon.b.service' 2026-03-10T14:12:41.967 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T14:12:41.967 INFO:tasks.cephadm.mon.b:Stopped mon.b 2026-03-10T14:12:41.967 INFO:tasks.cephadm.mgr.y:Stopping mgr.y... 2026-03-10T14:12:41.967 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mgr.y 2026-03-10T14:12:42.249 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:12:42 vm05.local systemd[1]: Stopping Ceph mgr.y for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T14:12:42.249 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 14:12:42 vm05.local podman[167504]: 2026-03-10 14:12:42.135530397 +0000 UTC m=+0.076751544 container died 7467828a73d7bb28ed474d6bf6e4eaeb531688302e4dda0b176565da140a28b7 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-y, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid) 2026-03-10T14:12:42.337 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mgr.y.service' 2026-03-10T14:12:42.372 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T14:12:42.372 INFO:tasks.cephadm.mgr.y:Stopped mgr.y 2026-03-10T14:12:42.372 INFO:tasks.cephadm.mgr.x:Stopping mgr.x... 2026-03-10T14:12:42.372 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mgr.x 2026-03-10T14:12:42.654 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 14:12:42 vm09.local systemd[1]: Stopping Ceph mgr.x for e063dc72-1c85-11f1-a098-09993c5c5b66... 
2026-03-10T14:12:42.654 INFO:journalctl@ceph.mgr.x.vm09.stdout:Mar 10 14:12:42 vm09.local podman[90075]: 2026-03-10 14:12:42.536001815 +0000 UTC m=+0.078898157 container died 15c4a5b90f703dc23149560a5c0b0654a9bed8a2912f7db9288e1266f1d844be (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-mgr-x, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default) 2026-03-10T14:12:42.757 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@mgr.x.service' 2026-03-10T14:12:42.791 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T14:12:42.791 INFO:tasks.cephadm.mgr.x:Stopped mgr.x 2026-03-10T14:12:42.791 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-10T14:12:42.791 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.0 2026-03-10T14:12:43.082 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 14:12:42 vm05.local systemd[1]: Stopping Ceph osd.0 for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T14:12:43.082 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 14:12:43 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-0[62706]: 2026-03-10T14:12:43.005+0000 7fea32257640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T14:12:43.082 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 14:12:43 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-0[62706]: 2026-03-10T14:12:43.005+0000 7fea32257640 -1 osd.0 782 *** Got signal Terminated *** 2026-03-10T14:12:43.082 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 14:12:43 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-0[62706]: 2026-03-10T14:12:43.005+0000 7fea32257640 -1 osd.0 782 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T14:12:48.332 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 14:12:48 vm05.local podman[167616]: 2026-03-10 14:12:48.056511822 +0000 UTC m=+5.164675113 container died b9c61c2f9ada571ba161ff4d7b6c59806739662d35bbfb91016b1a0e97fe0d3a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.schema-version=1.0) 2026-03-10T14:12:48.332 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 14:12:48 vm05.local podman[167616]: 2026-03-10 14:12:48.187928093 +0000 UTC m=+5.296091375 container remove b9c61c2f9ada571ba161ff4d7b6c59806739662d35bbfb91016b1a0e97fe0d3a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-0, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True) 2026-03-10T14:12:48.332 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 14:12:48 vm05.local bash[167616]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-0 2026-03-10T14:12:48.677 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 14:12:48 vm05.local podman[167689]: 2026-03-10 14:12:48.347341835 +0000 UTC m=+0.021386004 container create 89249f2e02f88f3e97064e236025bbe0aaaeebf42900336afbaaa48e3e209b46 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-0-deactivate, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS) 2026-03-10T14:12:48.677 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 14:12:48 vm05.local podman[167689]: 2026-03-10 14:12:48.401687526 +0000 UTC m=+0.075731705 container init 89249f2e02f88f3e97064e236025bbe0aaaeebf42900336afbaaa48e3e209b46 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-0-deactivate, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) 2026-03-10T14:12:48.677 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 14:12:48 vm05.local podman[167689]: 2026-03-10 14:12:48.406785478 +0000 UTC m=+0.080829647 container start 
89249f2e02f88f3e97064e236025bbe0aaaeebf42900336afbaaa48e3e209b46 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-0-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, OSD_FLAVOR=default) 2026-03-10T14:12:48.677 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 14:12:48 vm05.local podman[167689]: 2026-03-10 14:12:48.410323872 +0000 UTC m=+0.084368041 container attach 89249f2e02f88f3e97064e236025bbe0aaaeebf42900336afbaaa48e3e209b46 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-0-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T14:12:48.677 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 14:12:48 vm05.local podman[167689]: 2026-03-10 14:12:48.338839859 +0000 UTC m=+0.012884038 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T14:12:48.677 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 14:12:48 vm05.local podman[167689]: 2026-03-10 14:12:48.557510976 +0000 UTC m=+0.231555154 container died 89249f2e02f88f3e97064e236025bbe0aaaeebf42900336afbaaa48e3e209b46 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-0-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T14:12:48.694 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.0.service' 2026-03-10T14:12:48.728 DEBUG:teuthology.orchestra.run:got remote process result: None 
2026-03-10T14:12:48.728 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-10T14:12:48.728 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-10T14:12:48.728 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.1 2026-03-10T14:12:49.081 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 14:12:48 vm05.local systemd[1]: Stopping Ceph osd.1 for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T14:12:49.082 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 14:12:48 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1[68059]: 2026-03-10T14:12:48.877+0000 7f83e5d72640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T14:12:49.082 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 14:12:48 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1[68059]: 2026-03-10T14:12:48.877+0000 7f83e5d72640 -1 osd.1 782 *** Got signal Terminated *** 2026-03-10T14:12:49.082 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 14:12:48 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1[68059]: 2026-03-10T14:12:48.877+0000 7f83e5d72640 -1 osd.1 782 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T14:12:54.017 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 14:12:53 vm05.local podman[167808]: 2026-03-10 14:12:53.919603559 +0000 UTC m=+5.057596740 container died 6b40c4b164ea43f3ef138d2610b93482fe651b9d565bbd364edc4fe9fe4299c7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T14:12:54.282 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 14:12:54 vm05.local podman[167808]: 2026-03-10 14:12:54.047815923 +0000 UTC m=+5.185809104 container remove 6b40c4b164ea43f3ef138d2610b93482fe651b9d565bbd364edc4fe9fe4299c7 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T14:12:54.282 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 14:12:54 vm05.local bash[167808]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1 
2026-03-10T14:12:54.282 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 14:12:54 vm05.local podman[167897]: 2026-03-10 14:12:54.258853822 +0000 UTC m=+0.019426907 container create 9fc26334f290bed3ee614e6939601313ac9cf2c216eb248cbced50ff61c62f66 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1-deactivate, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T14:12:54.547 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 14:12:54 vm05.local podman[167897]: 2026-03-10 14:12:54.296262228 +0000 UTC m=+0.056835313 container init 9fc26334f290bed3ee614e6939601313ac9cf2c216eb248cbced50ff61c62f66 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T14:12:54.547 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 14:12:54 vm05.local podman[167897]: 2026-03-10 14:12:54.307367877 +0000 UTC m=+0.067940962 container start 9fc26334f290bed3ee614e6939601313ac9cf2c216eb248cbced50ff61c62f66 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1-deactivate, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid) 2026-03-10T14:12:54.547 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 14:12:54 vm05.local podman[167897]: 2026-03-10 14:12:54.31665982 +0000 UTC m=+0.077232905 container attach 9fc26334f290bed3ee614e6939601313ac9cf2c216eb248cbced50ff61c62f66 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1-deactivate, 
org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.build-date=20260223) 2026-03-10T14:12:54.547 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 14:12:54 vm05.local podman[167897]: 2026-03-10 14:12:54.250910793 +0000 UTC m=+0.011483878 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T14:12:54.547 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 14:12:54 vm05.local conmon[167910]: conmon 9fc26334f290bed3ee61 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9fc26334f290bed3ee614e6939601313ac9cf2c216eb248cbced50ff61c62f66.scope/memory.events 2026-03-10T14:12:54.547 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 14:12:54 vm05.local podman[167897]: 2026-03-10 14:12:54.433047189 +0000 UTC m=+0.193620264 container died 9fc26334f290bed3ee614e6939601313ac9cf2c216eb248cbced50ff61c62f66 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-1-deactivate, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T14:12:54.570 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.1.service' 2026-03-10T14:12:54.610 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T14:12:54.610 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-10T14:12:54.610 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-10T14:12:54.610 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.2 2026-03-10T14:12:54.831 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 14:12:54 vm05.local systemd[1]: Stopping Ceph osd.2 for e063dc72-1c85-11f1-a098-09993c5c5b66... 
2026-03-10T14:12:54.831 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 14:12:54 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-2[73563]: 2026-03-10T14:12:54.759+0000 7f6739232640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T14:12:54.831 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 14:12:54 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-2[73563]: 2026-03-10T14:12:54.759+0000 7f6739232640 -1 osd.2 782 *** Got signal Terminated *** 2026-03-10T14:12:54.832 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 14:12:54 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-2[73563]: 2026-03-10T14:12:54.759+0000 7f6739232640 -1 osd.2 782 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T14:13:00.059 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 14:12:59 vm05.local podman[168019]: 2026-03-10 14:12:59.799041366 +0000 UTC m=+5.061442509 container died acdaf36076eeb6ed3e30e45a515b22ef50b7e3e050565864c6e35129a58d320f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-2, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T14:13:00.059 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 14:12:59 vm05.local podman[168019]: 2026-03-10 14:12:59.924925666 +0000 UTC m=+5.187326809 container remove acdaf36076eeb6ed3e30e45a515b22ef50b7e3e050565864c6e35129a58d320f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.build-date=20260223) 2026-03-10T14:13:00.059 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 14:12:59 vm05.local bash[168019]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-2 2026-03-10T14:13:00.332 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 14:13:00 vm05.local podman[168093]: 2026-03-10 14:13:00.059770984 +0000 UTC m=+0.016258095 container create e809a2d12da0f045b6ae5b7391f97a76a6cc0512e566b2c80c67534c6bf76301 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-2-deactivate, 
org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T14:13:00.332 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 14:13:00 vm05.local podman[168093]: 2026-03-10 14:13:00.10314138 +0000 UTC m=+0.059628491 container init e809a2d12da0f045b6ae5b7391f97a76a6cc0512e566b2c80c67534c6bf76301 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-2-deactivate, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True) 2026-03-10T14:13:00.332 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 14:13:00 vm05.local podman[168093]: 2026-03-10 14:13:00.108992443 +0000 UTC m=+0.065479554 container start e809a2d12da0f045b6ae5b7391f97a76a6cc0512e566b2c80c67534c6bf76301 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-2-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , ceph=True, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T14:13:00.332 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 14:13:00 vm05.local podman[168093]: 2026-03-10 14:13:00.110416238 +0000 UTC m=+0.066903349 container attach e809a2d12da0f045b6ae5b7391f97a76a6cc0512e566b2c80c67534c6bf76301 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-2-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid) 2026-03-10T14:13:00.332 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 14:13:00 vm05.local podman[168093]: 2026-03-10 14:13:00.052967669 +0000 UTC m=+0.009454780 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T14:13:00.332 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 14:13:00 vm05.local conmon[168105]: conmon e809a2d12da0f045b6ae : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e809a2d12da0f045b6ae5b7391f97a76a6cc0512e566b2c80c67534c6bf76301.scope/memory.events 2026-03-10T14:13:00.332 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 14:13:00 vm05.local podman[168093]: 2026-03-10 14:13:00.246064702 +0000 UTC m=+0.202551804 container died e809a2d12da0f045b6ae5b7391f97a76a6cc0512e566b2c80c67534c6bf76301 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-2-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_REF=squid) 2026-03-10T14:13:00.380 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.2.service' 2026-03-10T14:13:00.420 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T14:13:00.421 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-10T14:13:00.421 INFO:tasks.cephadm.osd.3:Stopping osd.3... 2026-03-10T14:13:00.421 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.3 2026-03-10T14:13:00.832 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 14:13:00 vm05.local systemd[1]: Stopping Ceph osd.3 for e063dc72-1c85-11f1-a098-09993c5c5b66... 
2026-03-10T14:13:00.832 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 14:13:00 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-3[79227]: 2026-03-10T14:13:00.569+0000 7fdfe7075640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T14:13:00.832 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 14:13:00 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-3[79227]: 2026-03-10T14:13:00.569+0000 7fdfe7075640 -1 osd.3 782 *** Got signal Terminated *** 2026-03-10T14:13:00.832 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 14:13:00 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-3[79227]: 2026-03-10T14:13:00.569+0000 7fdfe7075640 -1 osd.3 782 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T14:13:05.902 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 14:13:05 vm05.local podman[168213]: 2026-03-10 14:13:05.615395064 +0000 UTC m=+5.060928567 container died 76510792410dbe892e1e2c4756bcd49bbc71b963d96109f9d19a4b2ee43b4e1b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-3, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T14:13:05.902 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 14:13:05 vm05.local podman[168213]: 2026-03-10 14:13:05.755728638 +0000 UTC m=+5.201262141 container remove 76510792410dbe892e1e2c4756bcd49bbc71b963d96109f9d19a4b2ee43b4e1b (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T14:13:05.902 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 14:13:05 vm05.local bash[168213]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-3 2026-03-10T14:13:06.213 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 14:13:05 vm05.local podman[168288]: 2026-03-10 14:13:05.901875846 +0000 UTC m=+0.020973181 container create 67edddc5411dd25d6c033ba261e50068ec45b0c77ebb6c963b761adba53cce62 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-3-deactivate, 
io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T14:13:06.213 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 14:13:05 vm05.local podman[168288]: 2026-03-10 14:13:05.948710096 +0000 UTC m=+0.067807431 container init 67edddc5411dd25d6c033ba261e50068ec45b0c77ebb6c963b761adba53cce62 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-3-deactivate, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T14:13:06.213 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 14:13:05 vm05.local podman[168288]: 2026-03-10 14:13:05.954013023 +0000 UTC m=+0.073110358 container start 67edddc5411dd25d6c033ba261e50068ec45b0c77ebb6c963b761adba53cce62 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-3-deactivate, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , ceph=True, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T14:13:06.213 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 14:13:05 vm05.local podman[168288]: 2026-03-10 14:13:05.95512822 +0000 UTC m=+0.074225555 container attach 67edddc5411dd25d6c033ba261e50068ec45b0c77ebb6c963b761adba53cce62 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-3-deactivate, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, 
FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T14:13:06.213 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 14:13:05 vm05.local podman[168288]: 2026-03-10 14:13:05.891016468 +0000 UTC m=+0.010113803 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T14:13:06.213 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 14:13:06 vm05.local podman[168288]: 2026-03-10 14:13:06.097627519 +0000 UTC m=+0.216724854 container died 67edddc5411dd25d6c033ba261e50068ec45b0c77ebb6c963b761adba53cce62 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-3-deactivate, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.build-date=20260223) 2026-03-10T14:13:06.233 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.3.service' 2026-03-10T14:13:06.269 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T14:13:06.269 INFO:tasks.cephadm.osd.3:Stopped osd.3 2026-03-10T14:13:06.269 INFO:tasks.cephadm.osd.4:Stopping osd.4... 2026-03-10T14:13:06.269 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.4 2026-03-10T14:13:06.674 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 14:13:06 vm09.local systemd[1]: Stopping Ceph osd.4 for e063dc72-1c85-11f1-a098-09993c5c5b66... 
2026-03-10T14:13:06.674 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 14:13:06 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4[57743]: 2026-03-10T14:13:06.377+0000 7f6b27565640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T14:13:06.674 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 14:13:06 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4[57743]: 2026-03-10T14:13:06.377+0000 7f6b27565640 -1 osd.4 782 *** Got signal Terminated *** 2026-03-10T14:13:06.674 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 14:13:06 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4[57743]: 2026-03-10T14:13:06.377+0000 7f6b27565640 -1 osd.4 782 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T14:13:09.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:09 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:09.223+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.537451+0000 front 2026-03-10T14:12:47.537350+0000 (oldest deadline 2026-03-10T14:13:09.137261+0000) 2026-03-10T14:13:10.424 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 14:13:10 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4[57743]: 2026-03-10T14:13:10.056+0000 7f6b2337d640 -1 osd.4 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.884101+0000 front 2026-03-10T14:12:47.884361+0000 (oldest deadline 2026-03-10T14:13:09.583931+0000) 2026-03-10T14:13:10.424 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:10 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:10.273+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.537451+0000 front 2026-03-10T14:12:47.537350+0000 (oldest deadline 2026-03-10T14:13:09.137261+0000) 2026-03-10T14:13:11.281 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:10 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5[62939]: 2026-03-10T14:13:10.937+0000 7f5b2f035640 -1 osd.5 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:44.595793+0000 front 2026-03-10T14:12:44.595786+0000 (oldest deadline 2026-03-10T14:13:10.495286+0000) 2026-03-10T14:13:11.282 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 14:13:11 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4[57743]: 2026-03-10T14:13:11.025+0000 7f6b2337d640 -1 osd.4 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.884101+0000 front 2026-03-10T14:12:47.884361+0000 (oldest deadline 2026-03-10T14:13:09.583931+0000) 2026-03-10T14:13:11.602 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 14:13:11 vm09.local podman[90199]: 2026-03-10 14:13:11.421532217 +0000 UTC m=+5.058405029 container died 85ac5ea92caa312a935a17adc4fce58ef930fe78bc83c4133c6aaf381bbd0f58 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T14:13:11.602 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:11 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:11.281+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.537451+0000 front 2026-03-10T14:12:47.537350+0000 (oldest deadline 2026-03-10T14:13:09.137261+0000) 2026-03-10T14:13:11.925 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 14:13:11 vm09.local podman[90199]: 2026-03-10 14:13:11.609687639 +0000 UTC m=+5.246560460 container remove 85ac5ea92caa312a935a17adc4fce58ef930fe78bc83c4133c6aaf381bbd0f58 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T14:13:11.925 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 14:13:11 vm09.local bash[90199]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4 2026-03-10T14:13:11.925 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 14:13:11 vm09.local podman[90277]: 2026-03-10 14:13:11.749258842 +0000 UTC m=+0.017982824 container create 6853b26ba23ec838e7a0283fab6c6c3505f5bab1a082fd884f06c201bd703cc1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.schema-version=1.0) 2026-03-10T14:13:11.925 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 14:13:11 vm09.local podman[90277]: 2026-03-10 14:13:11.782608892 +0000 UTC m=+0.051332874 container init 6853b26ba23ec838e7a0283fab6c6c3505f5bab1a082fd884f06c201bd703cc1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4-deactivate, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, 
org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T14:13:11.925 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 14:13:11 vm09.local podman[90277]: 2026-03-10 14:13:11.787519955 +0000 UTC m=+0.056243937 container start 6853b26ba23ec838e7a0283fab6c6c3505f5bab1a082fd884f06c201bd703cc1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223) 2026-03-10T14:13:11.925 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 14:13:11 vm09.local podman[90277]: 2026-03-10 14:13:11.789256827 +0000 UTC m=+0.057980809 container attach 6853b26ba23ec838e7a0283fab6c6c3505f5bab1a082fd884f06c201bd703cc1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4-deactivate, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid) 2026-03-10T14:13:11.925 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 14:13:11 vm09.local podman[90277]: 2026-03-10 14:13:11.741991377 +0000 UTC m=+0.010715359 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T14:13:11.925 INFO:journalctl@ceph.osd.4.vm09.stdout:Mar 10 14:13:11 vm09.local podman[90277]: 2026-03-10 14:13:11.924134573 +0000 UTC m=+0.192858555 container died 6853b26ba23ec838e7a0283fab6c6c3505f5bab1a082fd884f06c201bd703cc1 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-4-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20260223, CEPH_REF=squid, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3) 2026-03-10T14:13:12.053 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.4.service' 2026-03-10T14:13:12.087 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T14:13:12.087 INFO:tasks.cephadm.osd.4:Stopped osd.4 2026-03-10T14:13:12.087 INFO:tasks.cephadm.osd.5:Stopping osd.5... 2026-03-10T14:13:12.087 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.5 2026-03-10T14:13:12.174 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:11 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5[62939]: 2026-03-10T14:13:11.949+0000 7f5b2f035640 -1 osd.5 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:44.595793+0000 front 2026-03-10T14:12:44.595786+0000 (oldest deadline 2026-03-10T14:13:10.495286+0000) 2026-03-10T14:13:12.174 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:12 vm09.local systemd[1]: Stopping Ceph osd.5 for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T14:13:12.674 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:12 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5[62939]: 2026-03-10T14:13:12.248+0000 7f5b3321d640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T14:13:12.674 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:12 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5[62939]: 2026-03-10T14:13:12.248+0000 7f5b3321d640 -1 osd.5 782 *** Got signal Terminated *** 2026-03-10T14:13:12.674 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:12 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5[62939]: 2026-03-10T14:13:12.248+0000 7f5b3321d640 -1 osd.5 782 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T14:13:12.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:12 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:12.312+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.537451+0000 front 2026-03-10T14:12:47.537350+0000 (oldest deadline 2026-03-10T14:13:09.137261+0000) 2026-03-10T14:13:13.173 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:12 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5[62939]: 2026-03-10T14:13:12.997+0000 7f5b2f035640 -1 osd.5 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:44.595793+0000 front 2026-03-10T14:12:44.595786+0000 (oldest deadline 2026-03-10T14:13:10.495286+0000) 2026-03-10T14:13:13.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:12 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:12.879+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 
2026-03-10T14:13:13.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:13 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:13.362+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.537451+0000 front 2026-03-10T14:12:47.537350+0000 (oldest deadline 2026-03-10T14:13:09.137261+0000) 2026-03-10T14:13:14.174 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:13 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5[62939]: 2026-03-10T14:13:13.980+0000 7f5b2f035640 -1 osd.5 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:44.595793+0000 front 2026-03-10T14:12:44.595786+0000 (oldest deadline 2026-03-10T14:13:10.495286+0000) 2026-03-10T14:13:14.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:13 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:13.833+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:14.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:14 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:14.357+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.537451+0000 front 2026-03-10T14:12:47.537350+0000 (oldest deadline 2026-03-10T14:13:09.137261+0000) 2026-03-10T14:13:15.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:14 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:14.838+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:15.174 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:14 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5[62939]: 2026-03-10T14:13:14.983+0000 7f5b2f035640 -1 osd.5 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:44.595793+0000 front 2026-03-10T14:12:44.595786+0000 (oldest deadline 2026-03-10T14:13:10.495286+0000) 2026-03-10T14:13:15.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:15 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:15.333+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.537451+0000 front 2026-03-10T14:12:47.537350+0000 (oldest deadline 2026-03-10T14:13:09.137261+0000) 2026-03-10T14:13:16.173 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:16 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5[62939]: 2026-03-10T14:13:16.029+0000 7f5b2f035640 -1 osd.5 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:44.595793+0000 front 2026-03-10T14:12:44.595786+0000 (oldest deadline 2026-03-10T14:13:10.495286+0000) 2026-03-10T14:13:16.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:15 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:15.805+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:16.674 
INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:16 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:16.350+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.537451+0000 front 2026-03-10T14:12:47.537350+0000 (oldest deadline 2026-03-10T14:13:09.137261+0000) 2026-03-10T14:13:16.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:16 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:16.350+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.037982+0000 front 2026-03-10T14:12:52.038078+0000 (oldest deadline 2026-03-10T14:13:15.537813+0000) 2026-03-10T14:13:17.174 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:17 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5[62939]: 2026-03-10T14:13:17.020+0000 7f5b2f035640 -1 osd.5 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:44.595793+0000 front 2026-03-10T14:12:44.595786+0000 (oldest deadline 2026-03-10T14:13:10.495286+0000) 2026-03-10T14:13:17.174 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:17 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5[62939]: 2026-03-10T14:13:17.020+0000 7f5b2f035640 -1 osd.5 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:53.396068+0000 front 2026-03-10T14:12:53.396120+0000 (oldest deadline 2026-03-10T14:13:16.895732+0000) 2026-03-10T14:13:17.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:16 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:16.800+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:17.551 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:17 vm09.local podman[90397]: 2026-03-10 14:13:17.296925272 +0000 UTC m=+5.065765788 container died cf6c1b13aefc5d1dcbdc59d59bf1321757ea720816ba5a3ed8b27667edb02a1f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, io.buildah.version=1.41.3) 2026-03-10T14:13:17.551 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:17 vm09.local podman[90397]: 2026-03-10 14:13:17.421852512 +0000 UTC m=+5.190693028 container remove cf6c1b13aefc5d1dcbdc59d59bf1321757ea720816ba5a3ed8b27667edb02a1f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, 
org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0) 2026-03-10T14:13:17.551 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:17 vm09.local bash[90397]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5 2026-03-10T14:13:17.551 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:17 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:17.338+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.537451+0000 front 2026-03-10T14:12:47.537350+0000 (oldest deadline 2026-03-10T14:13:09.137261+0000) 2026-03-10T14:13:17.552 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:17 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:17.338+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.037982+0000 front 2026-03-10T14:12:52.038078+0000 (oldest deadline 2026-03-10T14:13:15.537813+0000) 2026-03-10T14:13:17.839 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:17 vm09.local podman[90474]: 2026-03-10 14:13:17.551829936 +0000 UTC m=+0.015792835 container create 298e175c5487628cec0b76a3ad40918f1812544ff73a91ca62ddd692395a638f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5-deactivate, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T14:13:17.839 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:17 vm09.local podman[90474]: 2026-03-10 14:13:17.588668125 +0000 UTC m=+0.052631024 container init 298e175c5487628cec0b76a3ad40918f1812544ff73a91ca62ddd692395a638f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5-deactivate, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T14:13:17.839 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:17 
vm09.local podman[90474]: 2026-03-10 14:13:17.592683673 +0000 UTC m=+0.056646572 container start 298e175c5487628cec0b76a3ad40918f1812544ff73a91ca62ddd692395a638f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5-deactivate, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T14:13:17.839 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:17 vm09.local podman[90474]: 2026-03-10 14:13:17.594599128 +0000 UTC m=+0.058562038 container attach 298e175c5487628cec0b76a3ad40918f1812544ff73a91ca62ddd692395a638f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5-deactivate, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, io.buildah.version=1.41.3) 2026-03-10T14:13:17.839 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:17 vm09.local podman[90474]: 2026-03-10 14:13:17.545506098 +0000 UTC m=+0.009469007 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T14:13:17.839 INFO:journalctl@ceph.osd.5.vm09.stdout:Mar 10 14:13:17 vm09.local podman[90474]: 2026-03-10 14:13:17.72418517 +0000 UTC m=+0.188148069 container died 298e175c5487628cec0b76a3ad40918f1812544ff73a91ca62ddd692395a638f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-5-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T14:13:17.839 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:17 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 
2026-03-10T14:13:17.754+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:17.839 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:17 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:17.754+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.650471+0000 front 2026-03-10T14:12:52.650594+0000 (oldest deadline 2026-03-10T14:13:17.350140+0000) 2026-03-10T14:13:17.855 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.5.service' 2026-03-10T14:13:17.886 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T14:13:17.886 INFO:tasks.cephadm.osd.5:Stopped osd.5 2026-03-10T14:13:17.886 INFO:tasks.cephadm.osd.6:Stopping osd.6... 2026-03-10T14:13:17.886 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.6 2026-03-10T14:13:18.174 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:17 vm09.local systemd[1]: Stopping Ceph osd.6 for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T14:13:18.174 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:18 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:18.035+0000 7f4621b80640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T14:13:18.174 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:18 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:18.035+0000 7f4621b80640 -1 osd.6 782 *** Got signal Terminated *** 2026-03-10T14:13:18.174 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:18 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:18.035+0000 7f4621b80640 -1 osd.6 782 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T14:13:18.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:18 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:18.297+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.537451+0000 front 2026-03-10T14:12:47.537350+0000 (oldest deadline 2026-03-10T14:13:09.137261+0000) 2026-03-10T14:13:18.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:18 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:18.297+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.037982+0000 front 2026-03-10T14:12:52.038078+0000 (oldest deadline 2026-03-10T14:13:15.537813+0000) 2026-03-10T14:13:19.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:18 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:18.776+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:19.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:18 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:18.776+0000 7fe131501640 
-1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.650471+0000 front 2026-03-10T14:12:52.650594+0000 (oldest deadline 2026-03-10T14:13:17.350140+0000) 2026-03-10T14:13:19.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:19 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:19.314+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.537451+0000 front 2026-03-10T14:12:47.537350+0000 (oldest deadline 2026-03-10T14:13:09.137261+0000) 2026-03-10T14:13:19.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:19 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:19.314+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.037982+0000 front 2026-03-10T14:12:52.038078+0000 (oldest deadline 2026-03-10T14:13:15.537813+0000) 2026-03-10T14:13:20.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:19 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:19.777+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:20.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:19 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:19.777+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.650471+0000 front 2026-03-10T14:12:52.650594+0000 (oldest deadline 2026-03-10T14:13:17.350140+0000) 2026-03-10T14:13:20.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:20 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:20.283+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.537451+0000 front 2026-03-10T14:12:47.537350+0000 (oldest deadline 2026-03-10T14:13:09.137261+0000) 2026-03-10T14:13:20.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:20 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:20.283+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.037982+0000 front 2026-03-10T14:12:52.038078+0000 (oldest deadline 2026-03-10T14:13:15.537813+0000) 2026-03-10T14:13:20.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:20 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:20.283+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-10T14:12:55.538421+0000 front 2026-03-10T14:12:55.538584+0000 (oldest deadline 2026-03-10T14:13:20.238097+0000) 2026-03-10T14:13:21.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:20 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:20.792+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:21.184 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:20 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:20.792+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 
192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.650471+0000 front 2026-03-10T14:12:52.650594+0000 (oldest deadline 2026-03-10T14:13:17.350140+0000) 2026-03-10T14:13:21.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:21 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:21.293+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.537451+0000 front 2026-03-10T14:12:47.537350+0000 (oldest deadline 2026-03-10T14:13:09.137261+0000) 2026-03-10T14:13:21.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:21 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:21.293+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.037982+0000 front 2026-03-10T14:12:52.038078+0000 (oldest deadline 2026-03-10T14:13:15.537813+0000) 2026-03-10T14:13:21.674 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:21 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:21.293+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-10T14:12:55.538421+0000 front 2026-03-10T14:12:55.538584+0000 (oldest deadline 2026-03-10T14:13:20.238097+0000) 2026-03-10T14:13:22.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:21 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:21.790+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:22.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:21 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:21.790+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.650471+0000 front 2026-03-10T14:12:52.650594+0000 (oldest deadline 2026-03-10T14:13:17.350140+0000) 2026-03-10T14:13:22.460 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:22 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:22.260+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.537451+0000 front 2026-03-10T14:12:47.537350+0000 (oldest deadline 2026-03-10T14:13:09.137261+0000) 2026-03-10T14:13:22.461 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:22 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:22.260+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.037982+0000 front 2026-03-10T14:12:52.038078+0000 (oldest deadline 2026-03-10T14:13:15.537813+0000) 2026-03-10T14:13:22.461 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:22 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6[68276]: 2026-03-10T14:13:22.260+0000 7f461d998640 -1 osd.6 782 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-10T14:12:55.538421+0000 front 2026-03-10T14:12:55.538584+0000 (oldest deadline 2026-03-10T14:13:20.238097+0000) 2026-03-10T14:13:23.075 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:22 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:22.785+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 
2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:23.075 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:22 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:22.785+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.650471+0000 front 2026-03-10T14:12:52.650594+0000 (oldest deadline 2026-03-10T14:13:17.350140+0000) 2026-03-10T14:13:23.075 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:22 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:22.785+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-10T14:12:57.350603+0000 front 2026-03-10T14:12:57.350562+0000 (oldest deadline 2026-03-10T14:13:22.650435+0000) 2026-03-10T14:13:23.424 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:23 vm09.local podman[90595]: 2026-03-10 14:13:23.075256941 +0000 UTC m=+5.053325631 container died fa810d125cab026a1380653a87e13a4b2e14fc7f77f92a8bb47eb251598edfc4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid) 2026-03-10T14:13:24.027 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:23 vm09.local podman[90595]: 2026-03-10 14:13:23.807550442 +0000 UTC m=+5.785619112 container remove fa810d125cab026a1380653a87e13a4b2e14fc7f77f92a8bb47eb251598edfc4 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T14:13:24.027 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:23 vm09.local bash[90595]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6 2026-03-10T14:13:24.027 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:23 vm09.local podman[90683]: 2026-03-10 14:13:23.977491917 +0000 UTC m=+0.059148725 container create 683ef4bcf62bc6d789466a6d8e94f2a3227221afd85c630f9450ede9109f51d2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6-deactivate, 
org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid) 2026-03-10T14:13:24.027 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:23 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:23.749+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:24.027 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:23 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:23.749+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.650471+0000 front 2026-03-10T14:12:52.650594+0000 (oldest deadline 2026-03-10T14:13:17.350140+0000) 2026-03-10T14:13:24.027 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:23 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:23.749+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-10T14:12:57.350603+0000 front 2026-03-10T14:12:57.350562+0000 (oldest deadline 2026-03-10T14:13:22.650435+0000) 2026-03-10T14:13:24.301 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:24 vm09.local podman[90683]: 2026-03-10 14:13:23.930488107 +0000 UTC m=+0.012144915 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T14:13:24.301 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:24 vm09.local podman[90683]: 2026-03-10 14:13:24.043112597 +0000 UTC m=+0.124769405 container init 683ef4bcf62bc6d789466a6d8e94f2a3227221afd85c630f9450ede9109f51d2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6-deactivate, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid) 2026-03-10T14:13:24.301 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:24 vm09.local podman[90683]: 2026-03-10 14:13:24.04932681 +0000 UTC m=+0.130983618 container start 683ef4bcf62bc6d789466a6d8e94f2a3227221afd85c630f9450ede9109f51d2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6-deactivate, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS) 2026-03-10T14:13:24.301 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:24 vm09.local podman[90683]: 2026-03-10 14:13:24.050598671 +0000 UTC m=+0.132255479 container attach 683ef4bcf62bc6d789466a6d8e94f2a3227221afd85c630f9450ede9109f51d2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T14:13:24.301 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:24 vm09.local conmon[90694]: conmon 683ef4bcf62bc6d78946 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-683ef4bcf62bc6d789466a6d8e94f2a3227221afd85c630f9450ede9109f51d2.scope/memory.events 2026-03-10T14:13:24.301 INFO:journalctl@ceph.osd.6.vm09.stdout:Mar 10 14:13:24 vm09.local podman[90683]: 2026-03-10 14:13:24.176753739 +0000 UTC m=+0.258410547 container died 683ef4bcf62bc6d789466a6d8e94f2a3227221afd85c630f9450ede9109f51d2 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-6-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) 2026-03-10T14:13:24.320 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.6.service' 2026-03-10T14:13:24.363 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T14:13:24.363 INFO:tasks.cephadm.osd.6:Stopped osd.6 2026-03-10T14:13:24.363 INFO:tasks.cephadm.osd.7:Stopping osd.7... 
2026-03-10T14:13:24.363 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.7 2026-03-10T14:13:24.674 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:24 vm09.local systemd[1]: Stopping Ceph osd.7 for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T14:13:24.674 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:24 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:24.547+0000 7fe134ee8640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T14:13:24.674 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:24 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:24.547+0000 7fe134ee8640 -1 osd.7 782 *** Got signal Terminated *** 2026-03-10T14:13:24.674 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:24 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:24.547+0000 7fe134ee8640 -1 osd.7 782 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T14:13:25.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:24 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:24.759+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:25.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:24 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:24.759+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.650471+0000 front 2026-03-10T14:12:52.650594+0000 (oldest deadline 2026-03-10T14:13:17.350140+0000) 2026-03-10T14:13:25.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:24 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:24.759+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-10T14:12:57.350603+0000 front 2026-03-10T14:12:57.350562+0000 (oldest deadline 2026-03-10T14:13:22.650435+0000) 2026-03-10T14:13:26.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:25 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:25.791+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:26.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:25 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:25.791+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.650471+0000 front 2026-03-10T14:12:52.650594+0000 (oldest deadline 2026-03-10T14:13:17.350140+0000) 2026-03-10T14:13:26.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:25 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:25.791+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-10T14:12:57.350603+0000 front 2026-03-10T14:12:57.350562+0000 (oldest deadline 2026-03-10T14:13:22.650435+0000) 2026-03-10T14:13:27.174 
INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:26 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:26.833+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:27.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:26 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:26.833+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.650471+0000 front 2026-03-10T14:12:52.650594+0000 (oldest deadline 2026-03-10T14:13:17.350140+0000) 2026-03-10T14:13:27.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:26 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:26.833+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-10T14:12:57.350603+0000 front 2026-03-10T14:12:57.350562+0000 (oldest deadline 2026-03-10T14:13:22.650435+0000) 2026-03-10T14:13:28.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:27 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:27.806+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:28.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:27 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:27.806+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.650471+0000 front 2026-03-10T14:12:52.650594+0000 (oldest deadline 2026-03-10T14:13:17.350140+0000) 2026-03-10T14:13:28.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:27 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:27.806+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-10T14:12:57.350603+0000 front 2026-03-10T14:12:57.350562+0000 (oldest deadline 2026-03-10T14:13:22.650435+0000) 2026-03-10T14:13:28.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:27 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:27.806+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6815 osd.3 since back 2026-03-10T14:13:02.651050+0000 front 2026-03-10T14:13:02.651086+0000 (oldest deadline 2026-03-10T14:13:27.350746+0000) 2026-03-10T14:13:29.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:28 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:28.838+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6803 osd.0 since back 2026-03-10T14:12:47.950087+0000 front 2026-03-10T14:12:47.950149+0000 (oldest deadline 2026-03-10T14:13:12.649884+0000) 2026-03-10T14:13:29.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:28 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:28.838+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6807 osd.1 since back 2026-03-10T14:12:52.650471+0000 front 2026-03-10T14:12:52.650594+0000 (oldest deadline 2026-03-10T14:13:17.350140+0000) 2026-03-10T14:13:29.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 
10 14:13:28 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:28.838+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6811 osd.2 since back 2026-03-10T14:12:57.350603+0000 front 2026-03-10T14:12:57.350562+0000 (oldest deadline 2026-03-10T14:13:22.650435+0000) 2026-03-10T14:13:29.174 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:28 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7[73640]: 2026-03-10T14:13:28.838+0000 7fe131501640 -1 osd.7 782 heartbeat_check: no reply from 192.168.123.105:6815 osd.3 since back 2026-03-10T14:13:02.651050+0000 front 2026-03-10T14:13:02.651086+0000 (oldest deadline 2026-03-10T14:13:27.350746+0000) 2026-03-10T14:13:29.857 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:29 vm09.local podman[90803]: 2026-03-10 14:13:29.597415049 +0000 UTC m=+5.080374985 container died e55cdaf17f0f9d0456a2e04f527cff545f02be9beceebd313c377b0bdfe18d11 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) 2026-03-10T14:13:29.857 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:29 vm09.local podman[90803]: 2026-03-10 14:13:29.718578703 +0000 UTC m=+5.201538639 container remove e55cdaf17f0f9d0456a2e04f527cff545f02be9beceebd313c377b0bdfe18d11 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T14:13:29.857 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:29 vm09.local bash[90803]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7 2026-03-10T14:13:30.143 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:29 vm09.local podman[90880]: 2026-03-10 14:13:29.857955392 +0000 UTC m=+0.018614396 container create 1c814af6520089e854569a6633e7bcc78da595a845aac6d0e34b4dba4737bed6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T14:13:30.143 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:29 vm09.local podman[90880]: 2026-03-10 14:13:29.89553992 +0000 UTC m=+0.056198934 container init 1c814af6520089e854569a6633e7bcc78da595a845aac6d0e34b4dba4737bed6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7-deactivate, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, org.label-schema.license=GPLv2) 2026-03-10T14:13:30.143 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:29 vm09.local podman[90880]: 2026-03-10 14:13:29.901260088 +0000 UTC m=+0.061919092 container start 1c814af6520089e854569a6633e7bcc78da595a845aac6d0e34b4dba4737bed6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7-deactivate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-10T14:13:30.143 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:29 vm09.local podman[90880]: 2026-03-10 14:13:29.902409349 +0000 UTC m=+0.063068343 container attach 1c814af6520089e854569a6633e7bcc78da595a845aac6d0e34b4dba4737bed6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7-deactivate, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) 2026-03-10T14:13:30.143 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:29 vm09.local podman[90880]: 2026-03-10 14:13:29.850746698 +0000 UTC m=+0.011405712 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T14:13:30.143 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:30 vm09.local conmon[90892]: conmon 1c814af6520089e85456 : Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c814af6520089e854569a6633e7bcc78da595a845aac6d0e34b4dba4737bed6.scope/memory.events 2026-03-10T14:13:30.143 INFO:journalctl@ceph.osd.7.vm09.stdout:Mar 10 14:13:30 vm09.local podman[90880]: 2026-03-10 14:13:30.028413556 +0000 UTC m=+0.189072560 container died 1c814af6520089e854569a6633e7bcc78da595a845aac6d0e34b4dba4737bed6 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-osd-7-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2) 2026-03-10T14:13:30.161 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@osd.7.service' 2026-03-10T14:13:30.195 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T14:13:30.195 INFO:tasks.cephadm.osd.7:Stopped osd.7 2026-03-10T14:13:30.195 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopping rgw.foo.a... 2026-03-10T14:13:30.195 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@rgw.foo.a 2026-03-10T14:13:30.582 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 10 14:13:30 vm05.local systemd[1]: Stopping Ceph rgw.foo.a for e063dc72-1c85-11f1-a098-09993c5c5b66... 
2026-03-10T14:13:30.582 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 10 14:13:30 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-rgw-foo-a[83421]: 2026-03-10T14:13:30.297+0000 7f6b563c0640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/radosgw -n client.rgw.foo.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T14:13:30.582 INFO:journalctl@ceph.rgw.foo.a.vm05.stdout:Mar 10 14:13:30 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-rgw-foo-a[83421]: 2026-03-10T14:13:30.297+0000 7f6b59c2f980 -1 shutting down 2026-03-10T14:13:32.615 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:32 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:32.252Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.105:8765: connect: connection refused" 2026-03-10T14:13:32.615 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:32 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:32.253Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.105:8765: connect: connection refused" 2026-03-10T14:13:32.615 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:32 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:32.253Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.105:8765: connect: connection refused" 2026-03-10T14:13:32.615 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:32 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:32.253Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.105:8765: connect: connection refused" 2026-03-10T14:13:32.615 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:32 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:32.253Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.105:8765: connect: connection refused" 2026-03-10T14:13:32.615 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:32 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:32.254Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.105:8765: connect: connection refused" 2026-03-10T14:13:40.503 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f 
-n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@rgw.foo.a.service' 2026-03-10T14:13:40.533 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T14:13:40.533 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopped rgw.foo.a 2026-03-10T14:13:40.533 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a... 2026-03-10T14:13:40.533 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@prometheus.a 2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local systemd[1]: Stopping Ceph prometheus.a for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:40.636Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:40.636Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:40.636Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:40.636Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:40.636Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:40.636Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:40.636Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:40.636Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:40.637Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:40.638Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 
2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:40.638Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a[81484]: ts=2026-03-10T14:13:40.638Z caller=main.go:1273 level=info msg="See you next time!" 2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local podman[91003]: 2026-03-10 14:13:40.648150935 +0000 UTC m=+0.027002429 container died 701a78c74ffd72dd32dfd6abdd9bc5cdffaf29c1f5bc4782d3c38311b37a1436 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local podman[91003]: 2026-03-10 14:13:40.767325376 +0000 UTC m=+0.146176870 container remove 701a78c74ffd72dd32dfd6abdd9bc5cdffaf29c1f5bc4782d3c38311b37a1436 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-10T14:13:40.831 INFO:journalctl@ceph.prometheus.a.vm09.stdout:Mar 10 14:13:40 vm09.local bash[91003]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-prometheus-a 2026-03-10T14:13:40.843 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@prometheus.a.service' 2026-03-10T14:13:40.873 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T14:13:40.873 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a 2026-03-10T14:13:40.873 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm rm-cluster --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 --force --keep-logs 2026-03-10T14:13:41.011 INFO:teuthology.orchestra.run.vm05.stdout:Deleting cluster with fsid: e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T14:13:42.582 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 14:13:42 vm05.local systemd[1]: Stopping Ceph alertmanager.a for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T14:13:42.582 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 14:13:42 vm05.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a[90378]: ts=2026-03-10T14:13:42.532Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 
2026-03-10T14:13:42.582 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 14:13:42 vm05.local podman[168946]: 2026-03-10 14:13:42.535149832 +0000 UTC m=+0.021035739 container died 485d9e5ae1f7994227f8f5bc7837ba9a18804889f2efb71cc68ee40ae4f1b351 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-10T14:13:42.935 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 14:13:42 vm05.local podman[168946]: 2026-03-10 14:13:42.687650671 +0000 UTC m=+0.173536578 container remove 485d9e5ae1f7994227f8f5bc7837ba9a18804889f2efb71cc68ee40ae4f1b351 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-10T14:13:42.935 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 14:13:42 vm05.local podman[168946]: 2026-03-10 14:13:42.688938393 +0000 UTC m=+0.174824300 volume remove 13c89785b291a5cbbaae89e1b8b7c0ff9985e64daf7c99a40b2d45bee0970e09 2026-03-10T14:13:42.935 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 14:13:42 vm05.local bash[168946]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-alertmanager-a 2026-03-10T14:13:42.935 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 14:13:42 vm05.local systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@alertmanager.a.service: Deactivated successfully. 2026-03-10T14:13:42.935 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 14:13:42 vm05.local systemd[1]: Stopped Ceph alertmanager.a for e063dc72-1c85-11f1-a098-09993c5c5b66. 2026-03-10T14:13:42.935 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 14:13:42 vm05.local systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@alertmanager.a.service: Consumed 1.531s CPU time. 2026-03-10T14:13:43.187 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 14:13:42 vm05.local systemd[1]: Stopping Ceph node-exporter.a for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T14:13:43.187 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 14:13:43 vm05.local podman[169056]: 2026-03-10 14:13:43.00244835 +0000 UTC m=+0.016927608 container died 166a8094f2e341c4a4b37b2d684ef28e8c69849e55e4950f368ae439bdf8f319 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-10T14:13:43.187 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 14:13:43 vm05.local podman[169056]: 2026-03-10 14:13:43.134224499 +0000 UTC m=+0.148703757 container remove 166a8094f2e341c4a4b37b2d684ef28e8c69849e55e4950f368ae439bdf8f319 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-10T14:13:43.187 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 14:13:43 vm05.local bash[169056]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-a 2026-03-10T14:13:43.187 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 14:13:43 vm05.local systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-10T14:13:43.450 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 14:13:43 vm05.local systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@node-exporter.a.service: Failed with result 'exit-code'. 
2026-03-10T14:13:43.450 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 14:13:43 vm05.local systemd[1]: Stopped Ceph node-exporter.a for e063dc72-1c85-11f1-a098-09993c5c5b66. 2026-03-10T14:13:43.450 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 14:13:43 vm05.local systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@node-exporter.a.service: Consumed 2.300s CPU time. 2026-03-10T14:13:43.800 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm rm-cluster --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 --force --keep-logs 2026-03-10T14:13:43.926 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T14:13:45.140 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:13:45 vm09.local systemd[1]: Stopping Ceph iscsi.iscsi.a for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T14:13:45.424 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:13:45 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a[78165]: debug Shutdown received 2026-03-10T14:13:55.480 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:13:55 vm09.local bash[91431]: time="2026-03-10T14:13:55Z" level=warning msg="StopSignal SIGTERM failed to stop container ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a in 10 seconds, resorting to SIGKILL" 2026-03-10T14:13:55.480 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:13:55 vm09.local podman[91431]: 2026-03-10 14:13:55.225646828 +0000 UTC m=+10.026543602 container died f92110db3ce56983e268366da8249f5883e9c249c1139b949014ae223ffd1f43 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2) 2026-03-10T14:13:55.480 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:13:55 vm09.local podman[91431]: 2026-03-10 14:13:55.343111267 +0000 UTC m=+10.144008041 container remove f92110db3ce56983e268366da8249f5883e9c249c1139b949014ae223ffd1f43 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T14:13:55.480 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:13:55 vm09.local bash[91431]: 
ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-iscsi-iscsi-a 2026-03-10T14:13:55.480 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:13:55 vm09.local systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@iscsi.iscsi.a.service: Main process exited, code=exited, status=137/n/a 2026-03-10T14:13:55.480 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:13:55 vm09.local systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@iscsi.iscsi.a.service: Failed with result 'exit-code'. 2026-03-10T14:13:55.480 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:13:55 vm09.local systemd[1]: Stopped Ceph iscsi.iscsi.a for e063dc72-1c85-11f1-a098-09993c5c5b66. 2026-03-10T14:13:55.480 INFO:journalctl@ceph.iscsi.iscsi.a.vm09.stdout:Mar 10 14:13:55 vm09.local systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@iscsi.iscsi.a.service: Consumed 2.324s CPU time. 2026-03-10T14:13:56.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 14:13:56 vm09.local systemd[1]: Stopping Ceph grafana.a for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T14:13:56.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 14:13:56 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=server t=2026-03-10T14:13:56.15816938Z level=info msg="Shutdown started" reason="System signal: terminated" 2026-03-10T14:13:56.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 14:13:56 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=ticker t=2026-03-10T14:13:56.158731684Z level=info msg=stopped last_tick=2026-03-10T14:13:50Z 2026-03-10T14:13:56.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 14:13:56 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=grafana-apiserver t=2026-03-10T14:13:56.158749277Z level=info msg="StorageObjectCountTracker pruner is exiting" 2026-03-10T14:13:56.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 14:13:56 vm09.local ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a[80421]: logger=tracing t=2026-03-10T14:13:56.159130891Z level=info msg="Closing tracing" 2026-03-10T14:13:56.363 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 14:13:56 vm09.local podman[91701]: 2026-03-10 14:13:56.169026564 +0000 UTC m=+0.024448548 container died 68dee2ed99826e4ce4719167423a6b1b97d2929e3bb0fd1efb8cbfb3ea841a61 (image=quay.io/ceph/grafana:10.4.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a, maintainer=Grafana Labs ) 2026-03-10T14:13:56.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 14:13:56 vm09.local podman[91701]: 2026-03-10 14:13:56.299014328 +0000 UTC m=+0.154436321 container remove 68dee2ed99826e4ce4719167423a6b1b97d2929e3bb0fd1efb8cbfb3ea841a61 (image=quay.io/ceph/grafana:10.4.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a, maintainer=Grafana Labs ) 2026-03-10T14:13:56.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 14:13:56 vm09.local bash[91701]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-grafana-a 2026-03-10T14:13:56.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 14:13:56 vm09.local systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@grafana.a.service: Deactivated successfully. 2026-03-10T14:13:56.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 14:13:56 vm09.local systemd[1]: Stopped Ceph grafana.a for e063dc72-1c85-11f1-a098-09993c5c5b66. 2026-03-10T14:13:56.364 INFO:journalctl@ceph.grafana.a.vm09.stdout:Mar 10 14:13:56 vm09.local systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@grafana.a.service: Consumed 10.675s CPU time. 
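Each cephadm-managed daemon above is torn down with the same two commands on its host: teuthology stops the daemon's systemd unit, then kills the journalctl follower it had attached to stream that unit's output. A minimal sketch of the pattern as it appears in this log, with the fsid and daemon name as placeholders:

    # stop the cephadm-managed unit for one daemon (fsid/daemon are placeholders)
    sudo systemctl stop ceph-<fsid>@<daemon>
    # then kill the "journalctl -f" process that was streaming that unit's log
    sudo pkill -f 'journalctl -f -n 0 -u ceph-<fsid>@<daemon>.service'

The differing exit statuses recorded by systemd reflect how each container stopped: 143 (128+SIGTERM) for daemons such as node-exporter that exit on SIGTERM, and 137 (128+SIGKILL) for iscsi.iscsi.a, where podman reported falling back to SIGKILL after the 10-second StopSignal timeout.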
2026-03-10T14:13:56.670 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 14:13:56 vm09.local systemd[1]: Stopping Ceph node-exporter.b for e063dc72-1c85-11f1-a098-09993c5c5b66... 2026-03-10T14:13:56.670 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 14:13:56 vm09.local podman[91811]: 2026-03-10 14:13:56.561237251 +0000 UTC m=+0.015260790 container died e48e92c6aac7416aa8d9f313b3bc775431de36a8bd6b6bd51c0981113cd62a0e (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-10T14:13:56.924 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 14:13:56 vm09.local podman[91811]: 2026-03-10 14:13:56.676791866 +0000 UTC m=+0.130815414 container remove e48e92c6aac7416aa8d9f313b3bc775431de36a8bd6b6bd51c0981113cd62a0e (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-10T14:13:56.924 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 14:13:56 vm09.local bash[91811]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66-node-exporter-b 2026-03-10T14:13:56.924 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 14:13:56 vm09.local systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@node-exporter.b.service: Main process exited, code=exited, status=143/n/a 2026-03-10T14:13:56.924 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 14:13:56 vm09.local systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@node-exporter.b.service: Failed with result 'exit-code'. 2026-03-10T14:13:56.924 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 14:13:56 vm09.local systemd[1]: Stopped Ceph node-exporter.b for e063dc72-1c85-11f1-a098-09993c5c5b66. 2026-03-10T14:13:56.924 INFO:journalctl@ceph.node-exporter.b.vm09.stdout:Mar 10 14:13:56 vm09.local systemd[1]: ceph-e063dc72-1c85-11f1-a098-09993c5c5b66@node-exporter.b.service: Consumed 2.268s CPU time. 2026-03-10T14:13:57.339 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T14:13:57.364 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T14:13:57.398 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-10T14:13:57.399 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1052/remote/vm05/crash 2026-03-10T14:13:57.399 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/crash -- . 2026-03-10T14:13:57.428 INFO:teuthology.orchestra.run.vm05.stderr:tar: /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/crash: Cannot open: No such file or directory 2026-03-10T14:13:57.428 INFO:teuthology.orchestra.run.vm05.stderr:tar: Error is not recoverable: exiting now 2026-03-10T14:13:57.429 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1052/remote/vm09/crash 2026-03-10T14:13:57.429 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/crash -- . 
2026-03-10T14:13:57.474 INFO:teuthology.orchestra.run.vm09.stderr:tar: /var/lib/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/crash: Cannot open: No such file or directory 2026-03-10T14:13:57.474 INFO:teuthology.orchestra.run.vm09.stderr:tar: Error is not recoverable: exiting now 2026-03-10T14:13:57.475 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-10T14:13:57.475 DEBUG:teuthology.orchestra.run.vm05:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'reached quota' | egrep -v 'but it is still running' | egrep -v 'overall HEALTH_' | egrep -v '\(POOL_FULL\)' | egrep -v '\(SMALLER_PGP_NUM\)' | egrep -v '\(CACHE_POOL_NO_HIT_SET\)' | egrep -v '\(CACHE_POOL_NEAR_FULL\)' | egrep -v '\(POOL_APP_NOT_ENABLED\)' | egrep -v '\(PG_AVAILABILITY\)' | egrep -v '\(PG_DEGRADED\)' | egrep -v CEPHADM_STRAY_DAEMON | head -n 1 2026-03-10T14:13:57.508 INFO:tasks.cephadm:Compressing logs... 2026-03-10T14:13:57.508 DEBUG:teuthology.orchestra.run.vm05:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T14:13:57.550 DEBUG:teuthology.orchestra.run.vm09:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T14:13:57.571 INFO:teuthology.orchestra.run.vm05.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T14:13:57.571 INFO:teuthology.orchestra.run.vm05.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T14:13:57.572 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-mon.a.log 2026-03-10T14:13:57.573 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.log 2026-03-10T14:13:57.573 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.audit.log 2026-03-10T14:13:57.576 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.log: /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-mon.a.log: 92.3% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T14:13:57.576 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-mgr.y.log 2026-03-10T14:13:57.577 INFO:teuthology.orchestra.run.vm05.stderr: 93.4% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.log.gz 2026-03-10T14:13:57.583 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.cephadm.log 2026-03-10T14:13:57.584 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-volume.log 2026-03-10T14:13:57.587 INFO:teuthology.orchestra.run.vm09.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T14:13:57.587 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T14:13:57.588 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- 
/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-volume.log 2026-03-10T14:13:57.588 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-mon.b.log 2026-03-10T14:13:57.589 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.cephadm.log: /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-mgr.y.log: 88.7% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.cephadm.log.gz 2026-03-10T14:13:57.590 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-mon.c.log 2026-03-10T14:13:57.590 INFO:teuthology.orchestra.run.vm05.stderr: 95.2% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.audit.log.gz 2026-03-10T14:13:57.596 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-mon.b.log: gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.cephadm.log 2026-03-10T14:13:57.598 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.0.log 2026-03-10T14:13:57.604 INFO:teuthology.orchestra.run.vm09.stderr: 91.2% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T14:13:57.604 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.cephadm.log: gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.audit.log 2026-03-10T14:13:57.606 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-mon.c.log: 94.9% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-volume.log.gz 2026-03-10T14:13:57.606 INFO:teuthology.orchestra.run.vm09.stderr: 80.2% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.cephadm.log.gz 2026-03-10T14:13:57.606 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.1.log 2026-03-10T14:13:57.608 INFO:teuthology.orchestra.run.vm09.stderr: 94.9% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-volume.log.gz 2026-03-10T14:13:57.608 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.log 2026-03-10T14:13:57.611 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.log: gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-mgr.x.log 2026-03-10T14:13:57.615 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.0.log: gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.2.log 2026-03-10T14:13:57.616 INFO:teuthology.orchestra.run.vm09.stderr: 88.1% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.log.gz 2026-03-10T14:13:57.620 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.3.log 2026-03-10T14:13:57.626 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-mgr.x.log: gzip -5 --verbose -- 
/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.4.log 2026-03-10T14:13:57.627 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-client.rgw.foo.a.log 2026-03-10T14:13:57.627 INFO:teuthology.orchestra.run.vm09.stderr: 92.2% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph.audit.log.gz 2026-03-10T14:13:57.629 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.5.log 2026-03-10T14:13:57.630 INFO:teuthology.orchestra.run.vm09.stderr: 92.5% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-mgr.x.log.gz 2026-03-10T14:13:57.634 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.4.log: /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.6.log 2026-03-10T14:13:57.644 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.7.log 2026-03-10T14:13:57.651 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/tcmu-runner.log 2026-03-10T14:13:57.656 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.7.log: /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/tcmu-runner.log: 63.5% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/tcmu-runner.log.gz 2026-03-10T14:13:57.746 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.3.log: /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-client.rgw.foo.a.log: 93.6% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-client.rgw.foo.a.log.gz 2026-03-10T14:13:58.347 INFO:teuthology.orchestra.run.vm05.stderr: 90.6% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-mgr.y.log.gz 2026-03-10T14:13:59.875 INFO:teuthology.orchestra.run.vm09.stderr: 92.1% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-mon.b.log.gz 2026-03-10T14:14:00.029 INFO:teuthology.orchestra.run.vm05.stderr: 91.8% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-mon.c.log.gz 2026-03-10T14:14:02.138 INFO:teuthology.orchestra.run.vm05.stderr: 91.5% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-mon.a.log.gz 2026-03-10T14:14:07.949 INFO:teuthology.orchestra.run.vm09.stderr: 94.6% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.5.log.gz 2026-03-10T14:14:07.963 INFO:teuthology.orchestra.run.vm09.stderr: 94.6% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.6.log.gz 2026-03-10T14:14:08.347 INFO:teuthology.orchestra.run.vm09.stderr: 94.6% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.4.log.gz 2026-03-10T14:14:08.361 INFO:teuthology.orchestra.run.vm09.stderr: 94.5% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.7.log.gz 2026-03-10T14:14:08.363 INFO:teuthology.orchestra.run.vm09.stderr: 2026-03-10T14:14:08.363 INFO:teuthology.orchestra.run.vm09.stderr:real 0m10.791s 2026-03-10T14:14:08.363 INFO:teuthology.orchestra.run.vm09.stderr:user 0m20.091s 
2026-03-10T14:14:08.363 INFO:teuthology.orchestra.run.vm09.stderr:sys 0m1.074s 2026-03-10T14:14:08.622 INFO:teuthology.orchestra.run.vm05.stderr: 94.6% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.2.log.gz 2026-03-10T14:14:08.711 INFO:teuthology.orchestra.run.vm05.stderr: 94.6% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.0.log.gz 2026-03-10T14:14:08.935 INFO:teuthology.orchestra.run.vm05.stderr: 94.6% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.1.log.gz 2026-03-10T14:14:09.032 INFO:teuthology.orchestra.run.vm05.stderr: 94.6% -- replaced with /var/log/ceph/e063dc72-1c85-11f1-a098-09993c5c5b66/ceph-osd.3.log.gz 2026-03-10T14:14:09.034 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-10T14:14:09.034 INFO:teuthology.orchestra.run.vm05.stderr:real 0m11.472s 2026-03-10T14:14:09.034 INFO:teuthology.orchestra.run.vm05.stderr:user 0m21.582s 2026-03-10T14:14:09.034 INFO:teuthology.orchestra.run.vm05.stderr:sys 0m1.221s 2026-03-10T14:14:09.034 INFO:tasks.cephadm:Archiving logs... 2026-03-10T14:14:09.034 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1052/remote/vm05/log 2026-03-10T14:14:09.035 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T14:14:10.124 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1052/remote/vm09/log 2026-03-10T14:14:10.124 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T14:14:11.104 INFO:tasks.cephadm:Removing cluster... 2026-03-10T14:14:11.104 DEBUG:teuthology.orchestra.run.vm05:> sudo cephadm rm-cluster --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 --force 2026-03-10T14:14:11.237 INFO:teuthology.orchestra.run.vm05.stdout:Deleting cluster with fsid: e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T14:14:11.631 DEBUG:teuthology.orchestra.run.vm09:> sudo cephadm rm-cluster --fsid e063dc72-1c85-11f1-a098-09993c5c5b66 --force 2026-03-10T14:14:11.761 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: e063dc72-1c85-11f1-a098-09993c5c5b66 2026-03-10T14:14:12.113 INFO:tasks.cephadm:Teardown complete 2026-03-10T14:14:12.114 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-10T14:14:12.116 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 2026-03-10T14:14:12.116 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-10T14:14:12.118 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-10T14:14:12.153 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 
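The package cleanup announced above is issued as a per-package loop (visible in the DEBUG lines that follow) rather than as a single transaction; a condensed sketch, with the full package list elided:

    # remove each Ceph package individually; "|| true" ensures a failing
    # remove does not abort the rest of the cleanup
    for d in ceph-radosgw ceph-test ceph ceph-base cephadm ... rbd-nbd ; do
        sudo yum -y remove $d || true
    done

Removing packages one at a time is why the yum output below repeats the dependency-resolution and transaction steps once per package on each of vm05 and vm09.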
2026-03-10T14:14:12.153 DEBUG:teuthology.orchestra.run.vm05:> 2026-03-10T14:14:12.153 DEBUG:teuthology.orchestra.run.vm05:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-10T14:14:12.153 DEBUG:teuthology.orchestra.run.vm05:> sudo yum -y remove $d || true 2026-03-10T14:14:12.153 DEBUG:teuthology.orchestra.run.vm05:> done 2026-03-10T14:14:12.158 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 2026-03-10T14:14:12.158 DEBUG:teuthology.orchestra.run.vm09:> 2026-03-10T14:14:12.158 DEBUG:teuthology.orchestra.run.vm09:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-10T14:14:12.158 DEBUG:teuthology.orchestra.run.vm09:> sudo yum -y remove $d || true 2026-03-10T14:14:12.158 DEBUG:teuthology.orchestra.run.vm09:> done 2026-03-10T14:14:12.397 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-10T14:14:12.398 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T14:14:12.398 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-10T14:14:12.398 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T14:14:12.398 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-10T14:14:12.398 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M 2026-03-10T14:14:12.398 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-10T14:14:12.398 INFO:teuthology.orchestra.run.vm09.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-10T14:14:12.398 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:12.398 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-10T14:14:12.398 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T14:14:12.398 INFO:teuthology.orchestra.run.vm09.stdout:Remove 2 Packages 2026-03-10T14:14:12.398 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:12.398 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 39 M 2026-03-10T14:14:12.398 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-10T14:14:12.403 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-10T14:14:12.403 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-10T14:14:12.413 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 
2026-03-10T14:14:12.414 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T14:14:12.414 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size 2026-03-10T14:14:12.414 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T14:14:12.414 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-10T14:14:12.414 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M 2026-03-10T14:14:12.414 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies: 2026-03-10T14:14:12.414 INFO:teuthology.orchestra.run.vm05.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-10T14:14:12.414 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:12.414 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-10T14:14:12.414 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T14:14:12.414 INFO:teuthology.orchestra.run.vm05.stdout:Remove 2 Packages 2026-03-10T14:14:12.414 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:12.415 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 39 M 2026-03-10T14:14:12.415 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-10T14:14:12.418 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-10T14:14:12.418 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-10T14:14:12.419 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-10T14:14:12.419 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-10T14:14:12.435 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 2026-03-10T14:14:12.436 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-10T14:14:12.452 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-10T14:14:12.468 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-10T14:14:12.477 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T14:14:12.477 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:12.477 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-10T14:14:12.477 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-10T14:14:12.477 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 2026-03-10T14:14:12.477 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:12.479 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T14:14:12.490 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T14:14:12.490 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:12.491 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 
2026-03-10T14:14:12.491 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-10T14:14:12.491 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 2026-03-10T14:14:12.491 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:12.493 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T14:14:12.622 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T14:14:12.628 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T14:14:12.694 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T14:14:12.695 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T14:14:12.763 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T14:14:12.763 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T14:14:12.772 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T14:14:12.772 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T14:14:12.913 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T14:14:12.913 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:12.913 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-10T14:14:12.913 INFO:teuthology.orchestra.run.vm09.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch 2026-03-10T14:14:12.913 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:12.913 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-10T14:14:12.999 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-10T14:14:12.999 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:12.999 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-10T14:14:12.999 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch 2026-03-10T14:14:12.999 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:12.999 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-10T14:14:13.218 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-10T14:14:13.218 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T14:14:13.218 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-10T14:14:13.218 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T14:14:13.218 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-10T14:14:13.218 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M 2026-03-10T14:14:13.218 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-10T14:14:13.218 INFO:teuthology.orchestra.run.vm09.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k 2026-03-10T14:14:13.218 INFO:teuthology.orchestra.run.vm09.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M 2026-03-10T14:14:13.218 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k 2026-03-10T14:14:13.218 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:13.218 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-10T14:14:13.218 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T14:14:13.218 INFO:teuthology.orchestra.run.vm09.stdout:Remove 4 Packages 2026-03-10T14:14:13.219 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:13.219 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 212 M 2026-03-10T14:14:13.219 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-10T14:14:13.220 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-10T14:14:13.220 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T14:14:13.220 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size 2026-03-10T14:14:13.220 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T14:14:13.220 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-10T14:14:13.220 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M 2026-03-10T14:14:13.220 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies: 2026-03-10T14:14:13.220 INFO:teuthology.orchestra.run.vm05.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k 2026-03-10T14:14:13.221 INFO:teuthology.orchestra.run.vm05.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M 2026-03-10T14:14:13.221 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k 2026-03-10T14:14:13.221 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:13.221 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-10T14:14:13.221 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T14:14:13.221 INFO:teuthology.orchestra.run.vm05.stdout:Remove 4 Packages 2026-03-10T14:14:13.221 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:13.221 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 212 M 2026-03-10T14:14:13.221 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-10T14:14:13.222 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 
2026-03-10T14:14:13.222 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-10T14:14:13.224 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-10T14:14:13.224 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-10T14:14:13.253 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-10T14:14:13.253 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-10T14:14:13.257 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 2026-03-10T14:14:13.257 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-10T14:14:13.322 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-10T14:14:13.323 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-10T14:14:13.339 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4 2026-03-10T14:14:13.339 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4 2026-03-10T14:14:13.341 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4 2026-03-10T14:14:13.346 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4 2026-03-10T14:14:13.349 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4 2026-03-10T14:14:13.351 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4 2026-03-10T14:14:13.371 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-10T14:14:13.371 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-10T14:14:13.446 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-10T14:14:13.446 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4 2026-03-10T14:14:13.446 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4 2026-03-10T14:14:13.446 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4 2026-03-10T14:14:13.448 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-10T14:14:13.448 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4 2026-03-10T14:14:13.449 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4 2026-03-10T14:14:13.449 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4 2026-03-10T14:14:13.518 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4 2026-03-10T14:14:13.518 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:13.518 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-10T14:14:13.518 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64 2026-03-10T14:14:13.518 INFO:teuthology.orchestra.run.vm05.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T14:14:13.518 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:13.518 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 
2026-03-10T14:14:13.519 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4 2026-03-10T14:14:13.519 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:13.519 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-10T14:14:13.519 INFO:teuthology.orchestra.run.vm09.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64 2026-03-10T14:14:13.519 INFO:teuthology.orchestra.run.vm09.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T14:14:13.519 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:13.519 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-10T14:14:13.763 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout: zip x86_64 3.0-35.el9 @baseos 724 k 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout:Remove 8 Packages 2026-03-10T14:14:13.764 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies: 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M 
2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout: zip x86_64 3.0-35.el9 @baseos 724 k 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================ 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout:Remove 8 Packages 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 28 M 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 28 M 2026-03-10T14:14:13.765 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-10T14:14:13.767 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-10T14:14:13.767 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-10T14:14:13.768 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-10T14:14:13.768 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-10T14:14:13.801 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 2026-03-10T14:14:13.802 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-10T14:14:13.802 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 
2026-03-10T14:14:13.802 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-10T14:14:13.853 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-10T14:14:13.861 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-10T14:14:13.861 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8 2026-03-10T14:14:13.866 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8 2026-03-10T14:14:13.867 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8 2026-03-10T14:14:13.869 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8 2026-03-10T14:14:13.871 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8 2026-03-10T14:14:13.872 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8 2026-03-10T14:14:13.873 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8 2026-03-10T14:14:13.876 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8 2026-03-10T14:14:13.876 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8 2026-03-10T14:14:13.878 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8 2026-03-10T14:14:13.879 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8 2026-03-10T14:14:13.880 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8 2026-03-10T14:14:13.901 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-10T14:14:13.901 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:13.901 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-10T14:14:13.901 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target". 2026-03-10T14:14:13.901 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target". 2026-03-10T14:14:13.901 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:13.901 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-10T14:14:13.902 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-10T14:14:13.902 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:13.902 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-10T14:14:13.902 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target". 2026-03-10T14:14:13.902 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target". 
2026-03-10T14:14:13.902 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:13.903 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-10T14:14:13.911 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-10T14:14:13.911 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-10T14:14:13.933 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-10T14:14:13.933 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:13.933 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-10T14:14:13.933 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target". 2026-03-10T14:14:13.933 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target". 2026-03-10T14:14:13.933 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:13.933 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-10T14:14:13.933 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:13.933 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-10T14:14:13.933 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target". 2026-03-10T14:14:13.933 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target". 
2026-03-10T14:14:13.933 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:13.933 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-10T14:14:13.934 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-10T14:14:14.026 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-10T14:14:14.026 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8 2026-03-10T14:14:14.026 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8 2026-03-10T14:14:14.026 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8 2026-03-10T14:14:14.026 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8 2026-03-10T14:14:14.026 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8 2026-03-10T14:14:14.026 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8 2026-03-10T14:14:14.026 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8 2026-03-10T14:14:14.032 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-10T14:14:14.032 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8 2026-03-10T14:14:14.032 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8 2026-03-10T14:14:14.032 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8 2026-03-10T14:14:14.032 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8 2026-03-10T14:14:14.032 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8 2026-03-10T14:14:14.032 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8 2026-03-10T14:14:14.032 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8 2026-03-10T14:14:14.082 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8 2026-03-10T14:14:14.082 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:14.082 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-10T14:14:14.082 INFO:teuthology.orchestra.run.vm05.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:14.082 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:14.082 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:14.082 INFO:teuthology.orchestra.run.vm05.stdout: lua-5.4.4-4.el9.x86_64 2026-03-10T14:14:14.082 INFO:teuthology.orchestra.run.vm05.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-10T14:14:14.082 INFO:teuthology.orchestra.run.vm05.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-10T14:14:14.082 INFO:teuthology.orchestra.run.vm05.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T14:14:14.082 INFO:teuthology.orchestra.run.vm05.stdout: zip-3.0-35.el9.x86_64 2026-03-10T14:14:14.082 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:14.082 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 
2026-03-10T14:14:14.093 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8 2026-03-10T14:14:14.093 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:14.093 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-10T14:14:14.093 INFO:teuthology.orchestra.run.vm09.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:14.093 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:14.093 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:14.093 INFO:teuthology.orchestra.run.vm09.stdout: lua-5.4.4-4.el9.x86_64 2026-03-10T14:14:14.093 INFO:teuthology.orchestra.run.vm09.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-10T14:14:14.093 INFO:teuthology.orchestra.run.vm09.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-10T14:14:14.093 INFO:teuthology.orchestra.run.vm09.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T14:14:14.093 INFO:teuthology.orchestra.run.vm09.stdout: zip-3.0-35.el9.x86_64 2026-03-10T14:14:14.093 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:14.093 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-10T14:14:14.308 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout:=========================================================================================== 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout:=========================================================================================== 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout:Removing: 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages: 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies: 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M 2026-03-10T14:14:14.315 
INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k 2026-03-10T14:14:14.315 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 
2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: 
python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-10T14:14:14.316 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout: 
qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout:=========================================================================================== 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout:Remove 102 Packages 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 613 M 2026-03-10T14:14:14.317 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-10T14:14:14.319 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-10T14:14:14.325 INFO:teuthology.orchestra.run.vm09.stdout:=========================================================================================== 2026-03-10T14:14:14.325 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-10T14:14:14.325 INFO:teuthology.orchestra.run.vm09.stdout:=========================================================================================== 2026-03-10T14:14:14.325 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-10T14:14:14.325 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M 2026-03-10T14:14:14.325 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages: 2026-03-10T14:14:14.325 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k 2026-03-10T14:14:14.325 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M 2026-03-10T14:14:14.325 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux x86_64 
2:19.2.3-678.ge911bdeb.el9 @ceph 138 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 2026-03-10T14:14:14.326 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M 2026-03-10T14:14:14.327 
INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 
k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-10T14:14:14.327 INFO:teuthology.orchestra.run.vm09.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout:=========================================================================================== 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout:Remove 102 Packages 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout: 
2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 613 M 2026-03-10T14:14:14.328 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-10T14:14:14.346 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-10T14:14:14.346 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-10T14:14:14.355 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-10T14:14:14.355 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-10T14:14:14.477 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 2026-03-10T14:14:14.478 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-10T14:14:14.478 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-10T14:14:14.478 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-10T14:14:14.658 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-10T14:14:14.658 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T14:14:14.666 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T14:14:14.672 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-10T14:14:14.672 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T14:14:14.683 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T14:14:14.688 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T14:14:14.688 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:14.688 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T14:14:14.688 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-10T14:14:14.688 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 2026-03-10T14:14:14.688 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:14.689 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T14:14:14.703 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T14:14:14.707 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T14:14:14.707 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:14.707 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T14:14:14.707 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-10T14:14:14.707 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 
2026-03-10T14:14:14.707 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:14.708 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T14:14:14.724 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T14:14:14.725 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102 2026-03-10T14:14:14.725 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T14:14:14.750 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102 2026-03-10T14:14:14.750 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T14:14:14.783 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T14:14:14.793 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102 2026-03-10T14:14:14.798 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102 2026-03-10T14:14:14.798 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T14:14:14.810 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T14:14:14.812 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T14:14:14.820 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102 2026-03-10T14:14:14.820 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102 2026-03-10T14:14:14.824 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102 2026-03-10T14:14:14.825 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102 2026-03-10T14:14:14.825 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T14:14:14.834 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102 2026-03-10T14:14:14.839 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T14:14:14.839 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102 2026-03-10T14:14:14.846 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102 2026-03-10T14:14:14.850 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102 2026-03-10T14:14:14.860 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102 2026-03-10T14:14:14.863 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T14:14:14.863 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:14.863 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 
2026-03-10T14:14:14.863 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-10T14:14:14.863 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-03-10T14:14:14.863 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:14.864 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102 2026-03-10T14:14:14.865 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T14:14:14.874 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T14:14:14.884 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T14:14:14.884 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:14.884 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-10T14:14:14.884 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-10T14:14:14.884 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-03-10T14:14:14.884 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:14.887 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T14:14:14.889 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T14:14:14.889 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:14.889 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-03-10T14:14:14.889 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:14.896 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T14:14:14.897 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T14:14:14.907 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T14:14:14.909 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102 2026-03-10T14:14:14.913 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T14:14:14.913 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:14.913 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-03-10T14:14:14.913 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:14.913 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102 2026-03-10T14:14:14.917 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102 2026-03-10T14:14:14.922 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T14:14:14.925 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102 2026-03-10T14:14:14.932 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T14:14:14.935 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102 2026-03-10T14:14:14.938 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102 2026-03-10T14:14:14.940 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102 2026-03-10T14:14:14.944 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102 2026-03-10T14:14:14.945 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102 2026-03-10T14:14:14.954 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102 2026-03-10T14:14:14.954 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102 2026-03-10T14:14:14.960 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102 2026-03-10T14:14:14.967 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102 2026-03-10T14:14:14.973 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102 2026-03-10T14:14:14.984 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102 2026-03-10T14:14:14.988 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102 2026-03-10T14:14:14.991 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102 2026-03-10T14:14:14.996 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102 2026-03-10T14:14:14.999 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102 2026-03-10T14:14:15.008 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102 2026-03-10T14:14:15.015 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102 2026-03-10T14:14:15.016 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-10T14:14:15.019 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102 2026-03-10T14:14:15.024 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-10T14:14:15.025 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102 2026-03-10T14:14:15.028 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102 2026-03-10T14:14:15.037 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102 2026-03-10T14:14:15.044 
INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102 2026-03-10T14:14:15.044 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-10T14:14:15.052 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102 2026-03-10T14:14:15.116 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102 2026-03-10T14:14:15.134 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102 2026-03-10T14:14:15.138 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102 2026-03-10T14:14:15.149 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T14:14:15.149 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 2026-03-10T14:14:15.149 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:15.151 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T14:14:15.155 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102 2026-03-10T14:14:15.168 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T14:14:15.168 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 2026-03-10T14:14:15.169 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:15.170 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T14:14:15.181 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T14:14:15.196 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102 2026-03-10T14:14:15.198 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102 2026-03-10T14:14:15.204 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102 2026-03-10T14:14:15.207 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102 2026-03-10T14:14:15.209 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102 2026-03-10T14:14:15.211 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102 2026-03-10T14:14:15.216 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102 2026-03-10T14:14:15.219 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102 2026-03-10T14:14:15.222 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102 2026-03-10T14:14:15.230 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-10T14:14:15.230 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:15.230 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 
2026-03-10T14:14:15.230 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-03-10T14:14:15.230 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 2026-03-10T14:14:15.230 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:15.231 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-10T14:14:15.242 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-10T14:14:15.242 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:15.242 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-10T14:14:15.242 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-03-10T14:14:15.242 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 2026-03-10T14:14:15.242 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:15.242 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-10T14:14:15.243 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-10T14:14:15.247 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102 2026-03-10T14:14:15.249 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102 2026-03-10T14:14:15.252 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102 2026-03-10T14:14:15.254 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102 2026-03-10T14:14:15.255 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102 2026-03-10T14:14:15.258 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102 2026-03-10T14:14:15.259 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102 2026-03-10T14:14:15.261 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102 2026-03-10T14:14:15.263 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102 2026-03-10T14:14:15.263 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102 2026-03-10T14:14:15.266 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102 2026-03-10T14:14:15.268 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102 2026-03-10T14:14:15.270 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102 2026-03-10T14:14:15.275 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102 2026-03-10T14:14:15.279 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102 2026-03-10T14:14:15.322 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102 2026-03-10T14:14:15.322 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : 
python3-pycparser-2.20-6.el9.noarch 43/102 2026-03-10T14:14:15.335 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102 2026-03-10T14:14:15.336 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102 2026-03-10T14:14:15.338 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102 2026-03-10T14:14:15.339 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102 2026-03-10T14:14:15.341 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102 2026-03-10T14:14:15.342 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102 2026-03-10T14:14:15.343 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102 2026-03-10T14:14:15.344 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102 2026-03-10T14:14:15.346 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102 2026-03-10T14:14:15.348 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102 2026-03-10T14:14:15.349 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102 2026-03-10T14:14:15.350 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102 2026-03-10T14:14:15.370 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-10T14:14:15.370 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:15.370 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-10T14:14:15.370 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:15.371 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-10T14:14:15.372 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-10T14:14:15.372 INFO:teuthology.orchestra.run.vm09.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T14:14:15.372 INFO:teuthology.orchestra.run.vm09.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 
2026-03-10T14:14:15.372 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:15.372 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-10T14:14:15.379 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-10T14:14:15.380 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102 2026-03-10T14:14:15.381 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102 2026-03-10T14:14:15.382 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102 2026-03-10T14:14:15.383 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102 2026-03-10T14:14:15.384 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102 2026-03-10T14:14:15.385 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102 2026-03-10T14:14:15.386 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102 2026-03-10T14:14:15.388 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102 2026-03-10T14:14:15.389 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102 2026-03-10T14:14:15.390 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102 2026-03-10T14:14:15.391 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102 2026-03-10T14:14:15.392 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102 2026-03-10T14:14:15.394 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102 2026-03-10T14:14:15.395 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102 2026-03-10T14:14:15.396 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102 2026-03-10T14:14:15.398 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102 2026-03-10T14:14:15.399 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102 2026-03-10T14:14:15.406 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102 2026-03-10T14:14:15.407 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102 2026-03-10T14:14:15.410 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102 2026-03-10T14:14:15.411 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102 2026-03-10T14:14:15.412 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102 2026-03-10T14:14:15.413 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102 2026-03-10T14:14:15.414 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102 2026-03-10T14:14:15.416 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102 2026-03-10T14:14:15.417 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102 2026-03-10T14:14:15.419 INFO:teuthology.orchestra.run.vm09.stdout: Erasing 
: python3-idna-2.10-7.el9.1.noarch 63/102 2026-03-10T14:14:15.423 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102 2026-03-10T14:14:15.424 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102 2026-03-10T14:14:15.426 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102 2026-03-10T14:14:15.428 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102 2026-03-10T14:14:15.432 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102 2026-03-10T14:14:15.434 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102 2026-03-10T14:14:15.435 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102 2026-03-10T14:14:15.438 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102 2026-03-10T14:14:15.441 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102 2026-03-10T14:14:15.444 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102 2026-03-10T14:14:15.444 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102 2026-03-10T14:14:15.447 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102 2026-03-10T14:14:15.448 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102 2026-03-10T14:14:15.450 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102 2026-03-10T14:14:15.451 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102 2026-03-10T14:14:15.454 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102 2026-03-10T14:14:15.455 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102 2026-03-10T14:14:15.458 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102 2026-03-10T14:14:15.459 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102 2026-03-10T14:14:15.461 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102 2026-03-10T14:14:15.463 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102 2026-03-10T14:14:15.466 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102 2026-03-10T14:14:15.469 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102 2026-03-10T14:14:15.474 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102 2026-03-10T14:14:15.476 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102 2026-03-10T14:14:15.477 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102 2026-03-10T14:14:15.480 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102 2026-03-10T14:14:15.482 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102 2026-03-10T14:14:15.484 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102 
2026-03-10T14:14:15.488 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102 2026-03-10T14:14:15.488 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102 2026-03-10T14:14:15.490 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102 2026-03-10T14:14:15.492 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102 2026-03-10T14:14:15.492 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102 2026-03-10T14:14:15.498 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102 2026-03-10T14:14:15.503 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102 2026-03-10T14:14:15.512 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T14:14:15.512 INFO:teuthology.orchestra.run.vm05.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 2026-03-10T14:14:15.512 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:15.519 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T14:14:15.524 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T14:14:15.525 INFO:teuthology.orchestra.run.vm09.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 2026-03-10T14:14:15.525 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:15.531 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T14:14:15.547 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T14:14:15.547 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-10T14:14:15.557 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102 2026-03-10T14:14:15.557 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-10T14:14:15.561 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-10T14:14:15.566 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102 2026-03-10T14:14:15.568 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102 2026-03-10T14:14:15.569 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102 2026-03-10T14:14:15.571 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102 2026-03-10T14:14:15.572 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-10T14:14:15.572 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102 2026-03-10T14:14:15.575 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102 2026-03-10T14:14:15.577 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102 2026-03-10T14:14:15.577 
INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-10T14:14:21.408 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-10T14:14:21.408 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /sys 2026-03-10T14:14:21.408 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /proc 2026-03-10T14:14:21.408 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /mnt 2026-03-10T14:14:21.408 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /var/tmp 2026-03-10T14:14:21.408 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /home 2026-03-10T14:14:21.408 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /root 2026-03-10T14:14:21.408 INFO:teuthology.orchestra.run.vm09.stdout:skipping the directory /tmp 2026-03-10T14:14:21.408 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:21.422 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102 2026-03-10T14:14:21.439 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102 2026-03-10T14:14:21.439 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /sys 2026-03-10T14:14:21.439 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /proc 2026-03-10T14:14:21.439 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /mnt 2026-03-10T14:14:21.439 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /var/tmp 2026-03-10T14:14:21.439 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /home 2026-03-10T14:14:21.439 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /root 2026-03-10T14:14:21.439 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /tmp 2026-03-10T14:14:21.439 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:21.443 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T14:14:21.443 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T14:14:21.452 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102 2026-03-10T14:14:21.452 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T14:14:21.455 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102 2026-03-10T14:14:21.458 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102 2026-03-10T14:14:21.461 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102 2026-03-10T14:14:21.463 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102 2026-03-10T14:14:21.463 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T14:14:21.469 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T14:14:21.469 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T14:14:21.479 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102 2026-03-10T14:14:21.479 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 
94/102 2026-03-10T14:14:21.482 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102 2026-03-10T14:14:21.484 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102 2026-03-10T14:14:21.485 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102 2026-03-10T14:14:21.487 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102 2026-03-10T14:14:21.488 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102 2026-03-10T14:14:21.489 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102 2026-03-10T14:14:21.491 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102 2026-03-10T14:14:21.493 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102 2026-03-10T14:14:21.493 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T14:14:21.499 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102 2026-03-10T14:14:21.507 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102 2026-03-10T14:14:21.510 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102 2026-03-10T14:14:21.514 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102 2026-03-10T14:14:21.515 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T14:14:21.515 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102 2026-03-10T14:14:21.518 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102 2026-03-10T14:14:21.522 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102 2026-03-10T14:14:21.525 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102 2026-03-10T14:14:21.531 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102 2026-03-10T14:14:21.540 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102 2026-03-10T14:14:21.546 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102 2026-03-10T14:14:21.546 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T14:14:21.644 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T14:14:21.644 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102 2026-03-10T14:14:21.644 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T14:14:21.644 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102 2026-03-10T14:14:21.644 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102 2026-03-10T14:14:21.644 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102 2026-03-10T14:14:21.645 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 
6/102 2026-03-10T14:14:21.645 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T14:14:21.645 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102 2026-03-10T14:14:21.645 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102 2026-03-10T14:14:21.646 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: 
Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102 2026-03-10T14:14:21.647 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102 
2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-10T14:14:21.649 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102 2026-03-10T14:14:21.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-10T14:14:21.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-10T14:14:21.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 
2026-03-10T14:14:21.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-10T14:14:21.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-10T14:14:21.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102 2026-03-10T14:14:21.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102 2026-03-10T14:14:21.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102 2026-03-10T14:14:21.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102 2026-03-10T14:14:21.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102 2026-03-10T14:14:21.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102 2026-03-10T14:14:21.650 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102 2026-03-10T14:14:21.660 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T14:14:21.660 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102 2026-03-10T14:14:21.660 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T14:14:21.660 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102 2026-03-10T14:14:21.660 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102 2026-03-10T14:14:21.660 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102 2026-03-10T14:14:21.660 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102 2026-03-10T14:14:21.660 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T14:14:21.661 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102 2026-03-10T14:14:21.661 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102 2026-03-10T14:14:21.661 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102 2026-03-10T14:14:21.661 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102 2026-03-10T14:14:21.661 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T14:14:21.661 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102 2026-03-10T14:14:21.661 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102 2026-03-10T14:14:21.661 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102 2026-03-10T14:14:21.661 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102 2026-03-10T14:14:21.661 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102 2026-03-10T14:14:21.661 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
flexiblas-netlib-3.0.4-9.el9.x86_64 18/102 2026-03-10T14:14:21.661 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102 2026-03-10T14:14:21.661 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102 2026-03-10T14:14:21.661 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102 2026-03-10T14:14:21.661 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : 
python3-cherrypy-18.6.1-2.el9.noarch 47/102 2026-03-10T14:14:21.662 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102 2026-03-10T14:14:21.663 
INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-10T14:14:21.663 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102 2026-03-10T14:14:21.664 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: abseil-cpp-20211102.0-4.el9.x86_64 
2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:21.776 INFO:teuthology.orchestra.run.vm09.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: 
protobuf-3.14.0-17.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T14:14:21.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T14:14:21.778 
INFO:teuthology.orchestra.run.vm09.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-oauthlib-3.1.1-5.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-ply-3.11-14.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable-0.7.2-27.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-pysocks-1.7.1-12.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-pytz-2021.1-5.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-10T14:14:21.778 INFO:teuthology.orchestra.run.vm09.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-10T14:14:21.779 INFO:teuthology.orchestra.run.vm09.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:21.779 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:21.779 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:21.779 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-10T14:14:21.779 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:21.779 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-10T14:14:21.779 INFO:teuthology.orchestra.run.vm05.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-10T14:14:21.779 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:21.779 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:21.779 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:21.780 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-chardet-4.0.0-5.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-idna-2.10-7.el9.1.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpatch-1.21-16.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-jsonpointer-2.0-4.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-oauthlib-3.1.1-5.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply-3.11-14.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-prettytable-0.7.2-27.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-pysocks-1.7.1-12.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-pytz-2021.1-5.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-10T14:14:21.781 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-10T14:14:21.782 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-10T14:14:21.782 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-10T14:14:21.782 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-10T14:14:21.782 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-10T14:14:21.782 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-10T14:14:21.782 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-10T14:14:21.782 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-10T14:14:21.782 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-10T14:14:21.782 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-10T14:14:21.782 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-10T14:14:21.782 INFO:teuthology.orchestra.run.vm05.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-10T14:14:21.782 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-10T14:14:21.782 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-10T14:14:21.782 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-10T14:14:21.782 INFO:teuthology.orchestra.run.vm05.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-10T14:14:21.783 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:21.783 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:21.783 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:21.997 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:21.997 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T14:14:21.997 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-10T14:14:21.997 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T14:14:21.997 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-10T14:14:21.997 INFO:teuthology.orchestra.run.vm05.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k
2026-03-10T14:14:21.997 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:21.997 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-10T14:14:21.997 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T14:14:21.997 INFO:teuthology.orchestra.run.vm05.stdout:Remove 1 Package
2026-03-10T14:14:21.997 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:21.997 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 775 k
2026-03-10T14:14:21.997 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-10T14:14:21.999 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-10T14:14:21.999 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-10T14:14:22.000 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-10T14:14:22.001 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-10T14:14:22.016 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
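The transaction above is the harness's package cleanup removing the distro-installed cephadm RPM from both nodes; each dnf run goes through the same resolve / check / test / transaction phases relayed into the job log. A minimal sketch of driving the same removal over SSH follows; the helper name and the plain "ssh" transport are illustrative assumptions, not teuthology's actual internals.

    # Minimal sketch (assumed helper, not the harness's real code): run
    # "sudo dnf -y remove <pkg>" on each test node and relay its stdout,
    # which is exactly what shows up as the vm05/vm09 lines in this log.
    import subprocess

    HOSTS = ["vm05", "vm09"]  # shortnames; the real targets use lab FQDNs

    def remove_package(host: str, package: str) -> int:
        cmd = ["ssh", host, "sudo", "dnf", "-y", "remove", package]
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.stdout)      # transaction summary, "Freed space: ...", etc.
        return result.returncode

    if __name__ == "__main__":
        for host in HOSTS:
            remove_package(host, "cephadm")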
2026-03-10T14:14:22.017 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T14:14:22.017 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T14:14:22.017 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T14:14:22.017 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T14:14:22.017 INFO:teuthology.orchestra.run.vm09.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k
2026-03-10T14:14:22.017 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:22.017 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T14:14:22.017 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T14:14:22.017 INFO:teuthology.orchestra.run.vm09.stdout:Remove 1 Package
2026-03-10T14:14:22.017 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:22.017 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 775 k
2026-03-10T14:14:22.017 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T14:14:22.018 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T14:14:22.019 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T14:14:22.020 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T14:14:22.020 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T14:14:22.178 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T14:14:22.179 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T14:14:22.239 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-10T14:14:22.239 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T14:14:22.571 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T14:14:22.578 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T14:14:22.617 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T14:14:22.618 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:22.618 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T14:14:22.618 INFO:teuthology.orchestra.run.vm09.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T14:14:22.618 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:22.618 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:22.625 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-10T14:14:22.625 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:22.625 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-10T14:14:22.625 INFO:teuthology.orchestra.run.vm05.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T14:14:22.625 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:22.625 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:22.816 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-immutable-object-cache
2026-03-10T14:14:22.817 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T14:14:22.820 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:22.821 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T14:14:22.821 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:22.831 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-immutable-object-cache
2026-03-10T14:14:22.831 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:22.835 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:22.835 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:22.836 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:23.000 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr
2026-03-10T14:14:23.000 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T14:14:23.004 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:23.004 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T14:14:23.004 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:23.026 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr
2026-03-10T14:14:23.026 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:23.029 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:23.030 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:23.030 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:23.179 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-dashboard
2026-03-10T14:14:23.179 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T14:14:23.183 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:23.183 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T14:14:23.183 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:23.210 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-dashboard
2026-03-10T14:14:23.210 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:23.214 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:23.214 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:23.214 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:23.358 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-10T14:14:23.358 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T14:14:23.361 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:23.362 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T14:14:23.362 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:23.441 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-10T14:14:23.625 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:23.625 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:23.625 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:23.625 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
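The repeated "No match for argument: <pkg>" / "No packages marked for removal." / "Nothing to do. Complete!" triplets are dnf's benign response when asked to remove a package that is already gone: the cleanup iterates over the full list of Ceph packages, and most were already erased by the big dependency-driven transaction above. A sketch of tolerating that case follows; the error handling is an assumption about intent, not lifted from the harness.

    # Sketch: treat "already absent" as success when purging a package list.
    # The package names are taken from the log; the logic is an assumption.
    import subprocess

    CEPH_PACKAGES = [
        "ceph-immutable-object-cache", "ceph-mgr", "ceph-mgr-dashboard",
        "ceph-mgr-diskprediction-local", "ceph-mgr-rook", "ceph-mgr-cephadm",
    ]

    def purge(host: str, package: str) -> bool:
        proc = subprocess.run(
            ["ssh", host, "sudo", "dnf", "-y", "remove", package],
            capture_output=True, text=True,
        )
        # "No packages marked for removal." on stderr means the package was
        # already removed by an earlier transaction - that is fine here.
        absent = "No packages marked for removal" in proc.stderr
        return proc.returncode == 0 or absent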
2026-03-10T14:14:23.844 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-rook
2026-03-10T14:14:23.844 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T14:14:23.848 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:23.849 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T14:14:23.849 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:23.919 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-rook
2026-03-10T14:14:23.949 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:23.949 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:23.949 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:23.949 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:24.163 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-mgr-cephadm
2026-03-10T14:14:24.163 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:24.166 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:24.167 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:24.167 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:24.169 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-mgr-cephadm
2026-03-10T14:14:24.170 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T14:14:24.174 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:24.174 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T14:14:24.174 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:24.372 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:24.373 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T14:14:24.373 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-10T14:14:24.373 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T14:14:24.373 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-10T14:14:24.373 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M
2026-03-10T14:14:24.373 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:24.373 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-10T14:14:24.373 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T14:14:24.373 INFO:teuthology.orchestra.run.vm05.stdout:Remove 1 Package
2026-03-10T14:14:24.373 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:24.373 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 3.6 M
2026-03-10T14:14:24.373 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-10T14:14:24.375 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-10T14:14:24.375 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-10T14:14:24.385 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-10T14:14:24.385 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-10T14:14:24.450 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-10T14:14:24.604 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:24.605 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T14:14:24.605 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T14:14:24.605 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T14:14:24.605 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T14:14:24.605 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M
2026-03-10T14:14:24.605 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:24.605 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T14:14:24.605 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T14:14:24.605 INFO:teuthology.orchestra.run.vm09.stdout:Remove 1 Package
2026-03-10T14:14:24.605 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:24.605 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 3.6 M
2026-03-10T14:14:24.605 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T14:14:24.607 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T14:14:24.607 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T14:14:24.607 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T14:14:24.617 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T14:14:24.617 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T14:14:24.734 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T14:14:24.759 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T14:14:24.970 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T14:14:25.121 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T14:14:25.121 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:25.121 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-10T14:14:25.121 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:25.121 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:25.121 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:25.151 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T14:14:25.198 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-10T14:14:25.198 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:25.198 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T14:14:25.198 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:25.198 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:25.198 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
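Note how the vm05 and vm09 entries interleave (vm05's Erasing lands between vm09's transaction check and test): the timestamps show both nodes being serviced concurrently rather than one after the other. A self-contained sketch of that fan-out shape follows; the two-worker pool is an assumption about the structure, not the harness's actual scheduler.

    # Sketch: purge the same package list on both nodes in parallel; the
    # interleaved per-host output is what produces the mixed log lines above.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    PACKAGES = ["ceph-fuse", "ceph-volume", "librados-devel"]

    def purge_host(host: str) -> None:
        for pkg in PACKAGES:
            subprocess.run(["ssh", host, "sudo", "dnf", "-y", "remove", pkg],
                           capture_output=True, text=True)

    with ThreadPoolExecutor(max_workers=2) as pool:
        list(pool.map(purge_host, ["vm05", "vm09"]))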
2026-03-10T14:14:25.325 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: ceph-volume
2026-03-10T14:14:25.325 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:25.329 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:25.329 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:25.330 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:25.397 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: ceph-volume
2026-03-10T14:14:25.397 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:25.397 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T14:14:25.397 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:25.397 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T14:14:25.719 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:25.720 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T14:14:25.720 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repo Size
2026-03-10T14:14:25.720 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T14:14:25.720 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-10T14:14:25.720 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k
2026-03-10T14:14:25.720 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages:
2026-03-10T14:14:25.720 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k
2026-03-10T14:14:25.720 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:25.720 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-10T14:14:25.720 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T14:14:25.721 INFO:teuthology.orchestra.run.vm05.stdout:Remove 2 Packages
2026-03-10T14:14:25.721 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:25.721 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 610 k
2026-03-10T14:14:25.721 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-10T14:14:25.722 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-10T14:14:25.722 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-10T14:14:25.732 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
2026-03-10T14:14:25.733 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-10T14:14:25.757 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:25.757 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T14:14:25.757 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repo Size
2026-03-10T14:14:25.758 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T14:14:25.758 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T14:14:25.758 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k
2026-03-10T14:14:25.758 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages:
2026-03-10T14:14:25.758 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k
2026-03-10T14:14:25.758 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:25.758 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T14:14:25.758 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T14:14:25.758 INFO:teuthology.orchestra.run.vm09.stdout:Remove 2 Packages
2026-03-10T14:14:25.758 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:25.758 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 610 k
2026-03-10T14:14:25.758 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T14:14:25.759 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T14:14:25.759 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T14:14:25.769 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T14:14:25.769 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T14:14:25.837 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-10T14:14:25.862 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T14:14:25.887 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T14:14:25.933 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T14:14:25.948 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T14:14:26.023 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T14:14:26.114 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T14:14:26.114 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T14:14:26.185 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T14:14:26.185 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T14:14:26.261 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T14:14:26.261 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:26.261 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-10T14:14:26.261 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:26.261 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:26.261 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:26.261 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:26.452 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:26.452 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T14:14:26.452 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repo Size
2026-03-10T14:14:26.452 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T14:14:26.452 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-10T14:14:26.452 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T14:14:26.452 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages:
2026-03-10T14:14:26.452 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T14:14:26.452 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-10T14:14:26.453 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T14:14:26.453 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:26.453 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-10T14:14:26.453 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T14:14:26.453 INFO:teuthology.orchestra.run.vm05.stdout:Remove 3 Packages
2026-03-10T14:14:26.453 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:26.453 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 3.7 M
2026-03-10T14:14:26.453 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check
2026-03-10T14:14:26.454 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded.
2026-03-10T14:14:26.454 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test
2026-03-10T14:14:26.472 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T14:14:26.472 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:26.472 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T14:14:26.472 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:26.472 INFO:teuthology.orchestra.run.vm09.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:26.472 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:26.472 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:26.472 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded.
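In these transactions dnf pulls extra packages along on its own: "Removing dependent packages" lists installed packages that require the one being removed (libcephfs-devel requires librados-devel), while "Removing unused dependencies" lists packages that nothing would depend on any more (python3-ceph-argparse once python3-cephfs goes). You can preview what a removal would drag along by asking dnf for reverse dependencies; a small sketch follows, using the real `dnf repoquery --installed --whatrequires` query (the wrapper itself is illustrative).

    # Sketch: list installed packages that would land in the
    # "Removing dependent packages:" section if `package` were removed.
    import subprocess

    def reverse_deps(package: str) -> list:
        out = subprocess.run(
            ["dnf", "-q", "repoquery", "--installed", "--whatrequires", package],
            capture_output=True, text=True,
        ).stdout
        return [line for line in out.splitlines() if line]

    print(reverse_deps("librados-devel"))  # e.g. libcephfs-devel on these nodes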
2026-03-10T14:14:26.473 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction
2026-03-10T14:14:26.594 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1
2026-03-10T14:14:26.611 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T14:14:26.637 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T14:14:26.638 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T14:14:26.765 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T14:14:26.765 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T14:14:26.765 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T14:14:26.774 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:26.777 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T14:14:26.777 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repo Size
2026-03-10T14:14:26.777 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T14:14:26.777 INFO:teuthology.orchestra.run.vm09.stdout:Removing:
2026-03-10T14:14:26.777 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T14:14:26.777 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages:
2026-03-10T14:14:26.777 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T14:14:26.777 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies:
2026-03-10T14:14:26.778 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T14:14:26.778 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:26.778 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T14:14:26.778 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T14:14:26.778 INFO:teuthology.orchestra.run.vm09.stdout:Remove 3 Packages
2026-03-10T14:14:26.778 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:26.778 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 3.7 M
2026-03-10T14:14:26.778 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T14:14:26.778 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T14:14:26.778 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T14:14:26.793 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T14:14:26.799 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T14:14:26.897 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T14:14:26.952 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T14:14:26.952 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:26.952 INFO:teuthology.orchestra.run.vm05.stdout:Removed:
2026-03-10T14:14:26.952 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:26.952 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:26.952 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:26.952 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:26.952 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:27.040 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T14:14:27.098 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T14:14:27.113 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T14:14:27.249 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: libcephfs-devel
2026-03-10T14:14:27.250 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:27.266 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:27.267 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:27.267 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:27.387 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T14:14:27.387 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T14:14:27.387 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T14:14:27.519 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:27.520 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T14:14:27.520 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-10T14:14:27.520 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T14:14:27.520 INFO:teuthology.orchestra.run.vm05.stdout:Removing:
2026-03-10T14:14:27.520 INFO:teuthology.orchestra.run.vm05.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-10T14:14:27.520 INFO:teuthology.orchestra.run.vm05.stdout:Removing dependent packages:
2026-03-10T14:14:27.520 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T14:14:27.520 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T14:14:27.520 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-10T14:14:27.520 INFO:teuthology.orchestra.run.vm05.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T14:14:27.520 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-10T14:14:27.520 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-10T14:14:27.520 INFO:teuthology.orchestra.run.vm05.stdout:Removing unused dependencies:
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout:================================================================================
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout:Remove 20 Packages
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout:Freed space: 79 M
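Removing librados2 cascades through every remaining RADOS consumer (the python3 bindings, qemu-kvm-block-rbd, rbd-fuse, rbd-nbd) plus the libraries nothing else needs any more (libarrow, thrift, lttng-ust, ...), 20 packages and 79 M in one transaction. The `warning: file /etc/ceph: remove failed: No such file or directory` that appears during the Erasing phase below is rpm noting that the directory was already cleaned up, and is harmless. A sketch for confirming the node really is clean afterwards follows; the host shortnames are assumptions.

    # Sketch: verify post-purge state. 'rpm -q <pkg>' exits non-zero when
    # the package is not installed, which is the expected outcome here.
    import subprocess

    def is_installed(host: str, package: str) -> bool:
        return subprocess.run(["ssh", host, "rpm", "-q", package],
                              capture_output=True).returncode == 0

    for pkg in ("librados2", "librbd1", "python3-rados"):
        assert not is_installed("vm05", pkg), f"{pkg} survived the purge"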
2026-03-10T14:14:27.521 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-10T14:14:27.525 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-10T14:14:27.525 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-10T14:14:27.545 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 2026-03-10T14:14:27.545 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-10T14:14:27.600 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3 2026-03-10T14:14:27.600 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:27.600 INFO:teuthology.orchestra.run.vm09.stdout:Removed: 2026-03-10T14:14:27.600 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:27.600 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:27.600 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:27.600 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:27.600 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-10T14:14:27.730 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-10T14:14:27.733 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20 2026-03-10T14:14:27.736 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20 2026-03-10T14:14:27.738 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20 2026-03-10T14:14:27.738 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20 2026-03-10T14:14:27.752 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20 2026-03-10T14:14:27.755 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20 2026-03-10T14:14:27.757 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20 2026-03-10T14:14:27.759 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20 2026-03-10T14:14:27.761 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20 2026-03-10T14:14:27.764 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20 2026-03-10T14:14:27.764 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-10T14:14:27.777 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-10T14:14:27.777 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20 2026-03-10T14:14:27.777 INFO:teuthology.orchestra.run.vm05.stdout:warning: file /etc/ceph: remove failed: No such file or directory 2026-03-10T14:14:27.777 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:27.789 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20 2026-03-10T14:14:27.791 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20 2026-03-10T14:14:27.795 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20 2026-03-10T14:14:27.798 
INFO:teuthology.orchestra.run.vm05.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20 2026-03-10T14:14:27.801 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20 2026-03-10T14:14:27.803 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20 2026-03-10T14:14:27.805 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20 2026-03-10T14:14:27.807 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20 2026-03-10T14:14:27.810 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20 2026-03-10T14:14:27.825 INFO:teuthology.orchestra.run.vm05.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20 2026-03-10T14:14:27.871 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: libcephfs-devel 2026-03-10T14:14:27.871 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal. 2026-03-10T14:14:27.875 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 2026-03-10T14:14:27.876 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do. 2026-03-10T14:14:27.876 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 2026-03-10T14:14:27.890 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20 2026-03-10T14:14:27.890 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 
18/20 2026-03-10T14:14:27.891 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20 2026-03-10T14:14:27.936 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout:Removed: 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T14:14:27.937 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-10T14:14:28.082 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved. 
2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout:Removing: 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout:Removing dependent packages: 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout:Removing unused dependencies: 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================ 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout:Remove 20 Packages 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout:Freed space: 79 M 
2026-03-10T14:14:28.084 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check
2026-03-10T14:14:28.088 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded.
2026-03-10T14:14:28.088 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test
2026-03-10T14:14:28.109 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded.
2026-03-10T14:14:28.109 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction
2026-03-10T14:14:28.149 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: librbd1
2026-03-10T14:14:28.149 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:28.151 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:28.152 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1
2026-03-10T14:14:28.153 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:28.153 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:28.155 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-10T14:14:28.158 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-10T14:14:28.160 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-10T14:14:28.160 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T14:14:28.174 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T14:14:28.179 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-10T14:14:28.181 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-10T14:14:28.183 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T14:14:28.184 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-10T14:14:28.186 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-10T14:14:28.186 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T14:14:28.205 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T14:14:28.205 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T14:14:28.205 INFO:teuthology.orchestra.run.vm09.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T14:14:28.205 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:28.222 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T14:14:28.225 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-10T14:14:28.228 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-10T14:14:28.231 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-10T14:14:28.234 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-10T14:14:28.237 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-10T14:14:28.240 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-10T14:14:28.254 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-10T14:14:28.330 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-rados
2026-03-10T14:14:28.330 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:28.332 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:28.333 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:28.333 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:28.350 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-10T14:14:28.460 INFO:teuthology.orchestra.run.vm09.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-10T14:14:28.538 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-10T14:14:28.539 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-10T14:14:28.614 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-rgw
2026-03-10T14:14:28.614 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:28.616 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:28.617 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:28.617 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:28.686 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-10T14:14:28.686 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:28.686 INFO:teuthology.orchestra.run.vm09.stdout:Removed:
2026-03-10T14:14:28.686 INFO:teuthology.orchestra.run.vm09.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T14:14:28.686 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T14:14:28.686 INFO:teuthology.orchestra.run.vm09.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T14:14:28.686 INFO:teuthology.orchestra.run.vm09.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T14:14:28.686 INFO:teuthology.orchestra.run.vm09.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T14:14:28.686 INFO:teuthology.orchestra.run.vm09.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T14:14:28.686 INFO:teuthology.orchestra.run.vm09.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:28.686 INFO:teuthology.orchestra.run.vm09.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:28.687 INFO:teuthology.orchestra.run.vm09.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T14:14:28.687 INFO:teuthology.orchestra.run.vm09.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:28.687 INFO:teuthology.orchestra.run.vm09.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T14:14:28.687 INFO:teuthology.orchestra.run.vm09.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T14:14:28.687 INFO:teuthology.orchestra.run.vm09.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:28.687 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:28.687 INFO:teuthology.orchestra.run.vm09.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:28.687 INFO:teuthology.orchestra.run.vm09.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T14:14:28.687 INFO:teuthology.orchestra.run.vm09.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:28.687 INFO:teuthology.orchestra.run.vm09.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T14:14:28.687 INFO:teuthology.orchestra.run.vm09.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T14:14:28.687 INFO:teuthology.orchestra.run.vm09.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T14:14:28.687 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T14:14:28.687 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:28.831 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-cephfs
2026-03-10T14:14:28.831 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:28.833 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:28.834 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:28.834 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:28.883 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: librbd1
2026-03-10T14:14:28.883 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T14:14:28.885 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:28.886 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T14:14:28.886 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:28.995 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: python3-rbd
2026-03-10T14:14:28.995 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:28.998 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:28.998 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:28.998 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:29.056 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rados
2026-03-10T14:14:29.057 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T14:14:29.058 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:29.059 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T14:14:29.059 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:29.157 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: rbd-fuse
2026-03-10T14:14:29.157 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:29.159 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:29.159 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:29.159 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:29.216 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rgw
2026-03-10T14:14:29.216 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T14:14:29.219 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:29.220 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T14:14:29.220 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:29.319 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: rbd-mirror
2026-03-10T14:14:29.319 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:29.322 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:29.323 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:29.323 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:29.384 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-cephfs
2026-03-10T14:14:29.384 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T14:14:29.386 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:29.386 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T14:14:29.386 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:29.490 INFO:teuthology.orchestra.run.vm05.stdout:No match for argument: rbd-nbd
2026-03-10T14:14:29.490 INFO:teuthology.orchestra.run.vm05.stderr:No packages marked for removal.
2026-03-10T14:14:29.492 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-10T14:14:29.493 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do.
2026-03-10T14:14:29.493 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-10T14:14:29.514 DEBUG:teuthology.orchestra.run.vm05:> sudo yum clean all
2026-03-10T14:14:29.546 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: python3-rbd
2026-03-10T14:14:29.547 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T14:14:29.548 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:29.549 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T14:14:29.549 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:29.637 INFO:teuthology.orchestra.run.vm05.stdout:56 files removed
2026-03-10T14:14:29.661 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T14:14:29.684 DEBUG:teuthology.orchestra.run.vm05:> sudo yum clean expire-cache
2026-03-10T14:14:29.707 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-fuse
2026-03-10T14:14:29.707 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T14:14:29.709 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:29.709 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T14:14:29.709 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:29.835 INFO:teuthology.orchestra.run.vm05.stdout:Cache was expired
2026-03-10T14:14:29.835 INFO:teuthology.orchestra.run.vm05.stdout:0 files removed
2026-03-10T14:14:29.857 DEBUG:teuthology.parallel:result is None
2026-03-10T14:14:29.877 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-mirror
2026-03-10T14:14:29.877 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T14:14:29.879 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:29.880 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T14:14:29.880 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:30.034 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: rbd-nbd
2026-03-10T14:14:30.034 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T14:14:30.036 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T14:14:30.037 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T14:14:30.037 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T14:14:30.059 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean all
2026-03-10T14:14:30.180 INFO:teuthology.orchestra.run.vm09.stdout:56 files removed
2026-03-10T14:14:30.200 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T14:14:30.223 DEBUG:teuthology.orchestra.run.vm09:> sudo yum clean expire-cache
2026-03-10T14:14:30.389 INFO:teuthology.orchestra.run.vm09.stdout:Cache was expired
2026-03-10T14:14:30.389 INFO:teuthology.orchestra.run.vm09.stdout:0 files removed
2026-03-10T14:14:30.410 DEBUG:teuthology.parallel:result is None
2026-03-10T14:14:30.410 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm05.local
2026-03-10T14:14:30.410 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm09.local
2026-03-10T14:14:30.410 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T14:14:30.410 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-10T14:14:30.438 DEBUG:teuthology.orchestra.run.vm05:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-10T14:14:30.439 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-10T14:14:30.506 DEBUG:teuthology.parallel:result is None
2026-03-10T14:14:30.507 DEBUG:teuthology.parallel:result is None
2026-03-10T14:14:30.507 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-10T14:14:30.509 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-10T14:14:30.509 DEBUG:teuthology.orchestra.run.vm05:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T14:14:30.549 DEBUG:teuthology.orchestra.run.vm09:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T14:14:30.561 INFO:teuthology.orchestra.run.vm05.stderr:bash: line 1: ntpq: command not found
2026-03-10T14:14:30.563 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found
2026-03-10T14:14:30.641 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T14:14:30.642 INFO:teuthology.orchestra.run.vm09.stdout:===============================================================================
2026-03-10T14:14:30.642 INFO:teuthology.orchestra.run.vm09.stdout:^* 148.0.90.77.hostbrr.com 2 8 377 194 -836us[-1128us] +/- 15ms
2026-03-10T14:14:30.642 INFO:teuthology.orchestra.run.vm09.stdout:^+ sambuca.psychonet.co.uk 2 8 377 260 +386us[ +94us] +/- 26ms
2026-03-10T14:14:30.642 INFO:teuthology.orchestra.run.vm09.stdout:^+ ntp.kernfusion.at 2 8 377 57 -2523us[-2523us] +/- 28ms
2026-03-10T14:14:30.642 INFO:teuthology.orchestra.run.vm09.stdout:^+ 172-104-154-182.ip.linod> 2 6 377 63 +5185us[+5185us] +/- 31ms
2026-03-10T14:14:30.642 INFO:teuthology.orchestra.run.vm05.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T14:14:30.642 INFO:teuthology.orchestra.run.vm05.stdout:===============================================================================
2026-03-10T14:14:30.642 INFO:teuthology.orchestra.run.vm05.stdout:^+ ntp.kernfusion.at 2 8 377 63 -2554us[-2554us] +/- 28ms
2026-03-10T14:14:30.642 INFO:teuthology.orchestra.run.vm05.stdout:^+ 172-104-154-182.ip.linod> 2 6 377 62 +5082us[+5082us] +/- 31ms
2026-03-10T14:14:30.642 INFO:teuthology.orchestra.run.vm05.stdout:^* 148.0.90.77.hostbrr.com 2 8 377 256 -978us[-1122us] +/- 16ms
2026-03-10T14:14:30.642 INFO:teuthology.orchestra.run.vm05.stdout:^+ sambuca.psychonet.co.uk 2 8 377 60 +277us[ +277us] +/- 26ms
2026-03-10T14:14:30.643 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-10T14:14:30.645 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-10T14:14:30.645 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-10T14:14:30.647 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-10T14:14:30.649 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-10T14:14:30.651 INFO:teuthology.task.internal:Duration was 2567.191209 seconds
2026-03-10T14:14:30.651 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-10T14:14:30.653 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-10T14:14:30.653 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T14:14:30.685 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T14:14:30.725 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T14:14:30.727 INFO:teuthology.orchestra.run.vm05.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T14:14:31.107 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-10T14:14:31.108 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm05.local
2026-03-10T14:14:31.108 DEBUG:teuthology.orchestra.run.vm05:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T14:14:31.170 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm09.local
2026-03-10T14:14:31.171 DEBUG:teuthology.orchestra.run.vm09:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T14:14:31.195 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-10T14:14:31.195 DEBUG:teuthology.orchestra.run.vm05:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T14:14:31.212 DEBUG:teuthology.orchestra.run.vm09:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T14:14:31.720 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T14:14:31.720 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T14:14:31.721 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T14:14:31.742 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T14:14:31.742 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T14:14:31.742 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T14:14:31.742 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T14:14:31.743 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T14:14:31.745 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T14:14:31.745 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T14:14:31.745 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: gzip -5 --verbose -- 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T14:14:31.745 INFO:teuthology.orchestra.run.vm09.stderr: /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T14:14:31.746 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T14:14:31.868 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 97.8% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T14:14:31.891 INFO:teuthology.orchestra.run.vm05.stderr: 97.6% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T14:14:31.894 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T14:14:31.896 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T14:14:31.896 DEBUG:teuthology.orchestra.run.vm05:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T14:14:31.957 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T14:14:31.981 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T14:14:31.983 DEBUG:teuthology.orchestra.run.vm05:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T14:14:31.999 DEBUG:teuthology.orchestra.run.vm09:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T14:14:32.023 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = core
2026-03-10T14:14:32.044 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = core
2026-03-10T14:14:32.058 DEBUG:teuthology.orchestra.run.vm05:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T14:14:32.089 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T14:14:32.090 DEBUG:teuthology.orchestra.run.vm09:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T14:14:32.113 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T14:14:32.114 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T14:14:32.116 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T14:14:32.116 DEBUG:teuthology.misc:Transferring archived files from vm05:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1052/remote/vm05
2026-03-10T14:14:32.116 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T14:14:32.161 DEBUG:teuthology.misc:Transferring archived files from vm09:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1052/remote/vm09
2026-03-10T14:14:32.161 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T14:14:32.188 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T14:14:32.188 DEBUG:teuthology.orchestra.run.vm05:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T14:14:32.199 DEBUG:teuthology.orchestra.run.vm09:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T14:14:32.242 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T14:14:32.253 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T14:14:32.253 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T14:14:32.301 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-10T14:14:32.301 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T14:14:32.302 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T14:14:32.316 INFO:teuthology.orchestra.run.vm05.stdout: 8532145 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 14:14 /home/ubuntu/cephtest
2026-03-10T14:14:32.318 INFO:teuthology.orchestra.run.vm09.stdout: 8532152 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 14:14 /home/ubuntu/cephtest
2026-03-10T14:14:32.319 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T14:14:32.350 INFO:teuthology.run:Summary data:
description: orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests}
duration: 2567.1912093162537
flavor: default
owner: kyr
success: true
2026-03-10T14:14:32.350 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T14:14:32.368 INFO:teuthology.run:pass